IoT Hasan Derhamy Print IV
ISSN 1402-1544
For Industrial Internet of Things
ISBN 978-91-7790-128-0 (print)
ISBN 978-91-7790-129-7 (pdf)
Hasan Derhamy
Industrial Electronics
Image on the cover:
Silver Fern (Cyathea dealbata, or from Māori kaponga or ponga), Auckland,
New Zealand, 2009.
Hasan Derhamy
Supervisors:
Jens Eliasson and Jerker Delsing
Printed by Luleå University of Technology, Graphic Production 2018
ISSN 1402-1544
ISBN 978-91-7790-128-0 (print)
ISBN 978-91-7790-129-7 (pdf)
Luleå 2018
www.ltu.se
To Nana and Grandad and my family
Abstract
As society has progressed through periods of evolution and revolution, technology has
played a key role as an enabler. In the same manner that mechanical machines of the
1800s drove the industrial revolution, digitalized machines are now driving another one.
With this recognition of a fourth industrial revolution, the Industry 4.0 initiative was
founded. One of the drivers of Industry 4.0 is the Industrial Internet of Things (IIoT).
The IIoT is a consequence of ubiquitous, interconnected computing. Software has become
a crucial tool of almost all industries, from bakeries and arts
to manufacturing facilities and banking. Programming is now a required competence
and used by a variety of professions. It is no longer only about algorithm development; it has
become more about engineering and integrating existing designs and tools. This impacts
the way software is architected and drives a large body of research in the area.
Software solutions are becoming more distributed, not only over multiple processes,
but over heterogeneous hardware and business domains. Computing platforms could be
mobile or geographically separated over large distances, exposing the solutions to network
disturbances, performance degradation and security vulnerabilities.
Hence, IIoT introduces complexity on a scale previously unseen in the software in-
dustry. Software architecture must accommodate these heterogeneous domains and com-
petencies and handle the increasing levels of complexity.
This thesis proposes an architectural style for designing IIoT software architectures.
The popular Service Oriented Architecture (SOA) style is not sufficient to define a com-
plete architecture for IIoT applications. The fundamental SOA principles are loose
coupling, lookup and late binding. The proposed architectural style extends these SOA
principles with autonomy, specialization, data at its source and first-person perspective.
It preserves the benefits of SOA, which models functionalities as reusable services with
standardized interfaces. Thus, the proposed style helps to capture the heterogeneity of
IIoT (e.g. systems, capabilities, domains, competencies etc.), while handling challenges
imposed by it. The style also captures resource constraints of IIoT platforms; distri-
bution of application logic across IIoT; dependence between services within IIoT; and
presentation of the solution in various stakeholder perspectives.
The IIoT generates large amounts of data that is subsequently stored, analysed,
archived and eventually fed back into the product life cycle. Centralization of data has
well known challenges. This thesis proposes a method of information extraction based
on the principle of data at its source. Such data preserves implicit context, reducing the
burden of semantic data within the system. Desired information is expressed through
dynamic (runtime) queries. Using the queries, a path is created to retrieve the requested
data. It alleviates the need for data to be stored in intermediary nodes; data remains at
the source. Thus, IIoT applications extract information and present it to requesting sys-
tems without redundant source related context. This helps with issues of data ownership,
access control and stale data.
Another IIoT challenge tackled in this work is decentralization of the Manufacturing
Execution System (MES). It is motivated by a need to mitigate the impact of vulnerable
shared networks on the factory floor, and by business requirements to reduce dependence
on local factory infrastructure. This thesis explores a solution where MES functions
are distributed to the workstations, enabling them to operate autonomously. Such
autonomous workstations utilize the proposed Intelligent Product, Workflow Manager
and Workflow Executor systems. Thus, MES can be decentralized to edge nodes as
envisioned by Industry 4.0.
Contents
Part I 1
Chapter 1 – Introduction 3
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 Problem formulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Research Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
1.4 Thesis scope . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.5 Thesis structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
Chapter 4 – Contribution 53
4.1 Summary of Appended Publications . . . . . . . . . . . . . . . . . . . . . 53
References 63
Part II 71
Paper A – A Survey of Commercial Frameworks for the Internet of
Things 73
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
2 What is an Internet of Things Framework . . . . . . . . . . . . . . . . . 76
3 Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4 Platforms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
Paper B – Translation Error Handling for Multi-Protocol SOA
Systems 95
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
2 Background and Related Work . . . . . . . . . . . . . . . . . . . . . . . . 99
3 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
4 Proposed Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
5 Implementation and Results . . . . . . . . . . . . . . . . . . . . . . . . . 107
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
7 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
8 Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
Paper C – IoT Interoperability - On-demand and low latency Trans-
parent Multi-protocol Translator 115
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
2 Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
3 Arrowhead Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
4 Proposed SOA-based translator . . . . . . . . . . . . . . . . . . . . . . . 123
5 Translator Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
6 Transparency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
7 Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
9 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
Paper D – Orchestration of Arrowhead services using IEC 61499:
Distributed Automation Case Study 141
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
2 IEC 61499 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
3 Arrowhead . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
4 Case study: Conveyor Loop with Distributed Control . . . . . . . . . . . 147
5 Application of Arrowhead architecture . . . . . . . . . . . . . . . . . . . 147
6 Application integration in IEC 61499 . . . . . . . . . . . . . . . . . . . . 150
7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
Paper E – Service Oriented Architecture Enabling The 4th Genera-
tion Of District Heating 155
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
2 The Frameworks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
3 Structure and Behavioral Models . . . . . . . . . . . . . . . . . . . . . . 161
4 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
5 Acknowledgment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
6 References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
Paper F – Protocol Interoperability of OPC UA in Service Oriented
Architectures 171
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
2 OPC Unified Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . 175
3 Proposed Translation Solution . . . . . . . . . . . . . . . . . . . . . . . . 176
4 Application Scenario . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
5 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
7 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
Paper G – In-network Processing for Context-Aware SOA-based Man-
ufacturing Systems 189
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
2 Background . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
3 Proposed solution - Expression of Interest . . . . . . . . . . . . . . . . . 196
4 Proposed solution - Dynamic Service Provisioning . . . . . . . . . . . . . 197
5 Wheel loader ball bearing monitoring . . . . . . . . . . . . . . . . . . . . 200
6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 202
Paper H – Workflow Management for Edge Driven Manufacturing
Systems 205
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
2 Related work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
3 Arrowhead Framework . . . . . . . . . . . . . . . . . . . . . . . . . . . . 210
4 Work flow handling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211
5 Edge Automation Services . . . . . . . . . . . . . . . . . . . . . . . . . . 213
6 Use case implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
7 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
8 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Paper I – System of Systems Composition based on Decentralized
Service Oriented Architecture 223
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
2 Challenges for Industrial Information Distribution . . . . . . . . . . . . . 228
3 Related Works and Technologies . . . . . . . . . . . . . . . . . . . . . . . 228
4 Proposed solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 230
5 A Graph analysis of Arrowhead Framework . . . . . . . . . . . . . . . . . 231
6 Building Systems of Systems using graph queries . . . . . . . . . . . . . . 235
7 Implementation of a System of System Composer . . . . . . . . . . . . . 239
8 Demonstration of proposed solution . . . . . . . . . . . . . . . . . . . . . 246
9 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248
Paper J – Software Architectural Style for the Industrial Internet
of Things 251
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
2 Software Architecture Practice . . . . . . . . . . . . . . . . . . . . . . . . 256
3 Advanced Architectural Styles . . . . . . . . . . . . . . . . . . . . . . . . 258
4 Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
5 Systems of Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263
6 Challenges of Software Engineering in IIoT . . . . . . . . . . . . . . . . . 266
7 Principles for Software Architecture design in IIoT . . . . . . . . . . . . 268
8 The Principled Decomposition . . . . . . . . . . . . . . . . . . . . . . . . 271
9 The Architectural Style . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
10 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 282
Acknowledgments
I would like to start by saying this has been an inspiring and rewarding experience.
I enjoyed researching the Industrial Internet of Things and working on cutting-edge
technologies. I have been challenged to think and argue scientifically and in the process
extended my knowledge. My research has been motivating, exciting and satisfying, and
I appreciate the support and opportunities. I am thankful to all, even though not all are
mentioned below.
I enjoyed working with and would like to express my gratitude to my supervisors Jens
Eliasson and Jerker Delsing for their guidance and patience. I value the freedom given
to me in my research and the thought provoking, visionary discussions and feedback.
I appreciate the opportunity to work on large IoT projects, and to collaborate with
academic and industrial partners. I enjoyed the challenging debates, constructive criti-
cism and professional opinion.
I would like to thank the Volvo Trucks team, Frida Schildauer, Mattias Andersson and
Patrik Gustafsson for valuable feedback and discussions.
I had a great team at the EISLAB, and I enjoyed interesting discussions, opinions
and conversations.
My family has my warm gratitude for their support and patience.
Part I
Chapter 1
Introduction
1.1 Introduction
The world of computer systems, including software and hardware, is in a constant state
of evolution. Software is taking a growing proportion of the responsibility for overall
system functionality. It has become a crucial tool of almost all industries from bakeries
and arts to manufacturing facilities and banking. As systems gain more software aspects,
managing software complexity and heterogeneity becomes a key challenge [1]. In the late
80s and early 90s, it was already well recognized that software architecture is key
to managing this complexity. This early work laid the foundation for the area of software
architecture [2]. It is the study of the form and structure of software abstractions intended
to satisfy a set of functional and non-functional requirements. The field of software
architecture has grown and been applied to many different computing challenges. It is
fully introduced in Chapter 2.
The Industrial Internet of Things (IIoT) is a logical consequence of ubiquitous, inter-
connected computing. It had simple beginnings: in 1982, researchers at Carnegie Mellon
University connected a vending machine to the Internet, allowing staff to query the state
of the Coke in the machine (empty, warm, cold) before walking over to buy a soda [3].
In this sense, the vending machine was the first Internet of Things (IoT)
device. It was actively sensing its environment, storing and processing information and
had connectivity to its stakeholders. The current trend of IIoT (digitalization) seen
around the world is an evolution of this simple scenario.
Technological and business opportunities and challenges within IIoT have given rise
to a “zoo” of software solutions. In this context “zoo” implies variations of size, domain,
methods, forms (web app, smart app, libraries, tools), solutions and more.
Computing domains in industry are commonly grouped into operational and informa-
tion technology (OT, IT). They are shaped by different requirements and environments:
one focused on control (automating mechanical processes), real-time requirements, relia-
bility and long life cycle etc.; the other focused on data management, process efficiency,
business process management and frequent updates etc. This impacted the development
of hardware, communications and software. The hardware platforms acquired different
properties. In the OT domain, deterministic operation is built into the platform, which is also
tightly integrated with the physical world (e.g. A2D/D2A and DIO ports). In the IT domain,
platforms are high-performance processors with large memory and storage. Similarly,
communication networks diverged in their properties. In the OT domain, communication
is commonly deterministic, requires hard-wired addressing and is secured by dedicated or
restricted access. In the IT domain, the Internet stack is the prevailing communication stan-
dard, with a global addressing scheme, high-bandwidth links and cryptographic security.
Software development specialized to address the demands of each domain. In the OT domain,
software emphasises deterministic modelling abstractions, machine-to-machine interac-
tions and robustness. In the IT domain, software realizes high-level abstractions, reconfig-
urable functions and human-oriented interfaces. This has resulted in differences in software
properties, capabilities, scale, tools, technologies, competencies etc., i.e. in architectural
approaches.
Software within the two domains is put together to achieve overall business goals.
As shown in Figure 1.1, IT and OT have divided the ISA 95 [4] pyramid and taken
responsibility for the different levels. OT is at the manufacturing control layer (level 1
and 2), responsible for sensing, manipulating, monitoring and controlling the production
processes etc. IT is at the manufacturing operations, business planning and logistics
layers (level 3 and 4), responsible for plant scheduling, inventory management, logistics,
workflow, recipe control etc.
Figure 1.1: ISA 95 pyramid with Information and Operational Technology domains shown.
Building on existing ISA 95 functions, the challenges of Industry 4.0 can be approached
in three ways:
1. Implement it in IT Shown in Figure 1.2, this will leverage powerful tools, methods,
processing power and architectural approaches. Communication, synchronization with the
physical production process (level 0), determinism and real-time operation are the challenges
to be resolved. (An extreme case of this could be cloud-based manufacturing with direct
access to the production process.)
Figure 1.2: Implementation of Industry 4.0 with Information Technology realizing ISA 95 func-
tions.
3. A fusion of IT and OT Shown in Figure 1.4, this will leverage both IT and OT
strengths. Such a fusion is an IIoT system that is capable of realizing Industry 4.0 fea-
tures. It requires multidisciplinary teams and architectural vision. Such an architecture
has to accommodate heterogeneity of systems, capabilities, domains, competencies etc.
Creating such an architecture is the challenge.
The IT and OT domains are decomposed and rearranged together into autonomous
nodes. The system is highly distributed with horizontal integration. IIoT pushes away the
pure vertical integration and enables a holistic view of the software architecture on each
node. Enabling technologies and methods for this change are Edge [6] and Fog [7] com-
puting. The technological heterogeneity of such applications is another challenging issue
[8]. For example, within a single IIoT application several communication protocols can
be employed. This requires communication interoperability (as defined in [9]) solutions
Figure 1.3: Implementation of Industry 4.0 with Operational Technology realizing ISA 95 func-
tions.
Figure 1.4: Implementation of Industry 4.0 with fusion of Information and Operational Tech-
nology. ISA 95 functions are realized as IIoT applications.
like translators [10] between different request/response and push/pull protocols such as
HTTP, CoAP, MQTT, AMQP, XMPP and OPC-UA. The communications heterogeneity
stems from the mixture of computing platforms in the IIoT, which includes small
8-bit micro-controllers (MCUs), 32-bit Digital Signal Processors (DSPs), 32-bit or even
64-bit single-board computers, general-purpose PCs and high-end server-grade hardware.
This huge variation of hardware capability entails a large variation of communications
technologies, application spaces and requirements. Therefore, solutions will tailor
the communications according to the functional and non-functional requirements. This
means that multi-protocol solutions are inevitable. Existing approaches may utilize a
proxy (reverse, forward or injection) [11], a gateway [12], or something more advanced
such as an ESB [13]. All of these solutions face well known challenges of centralization
or pre-configuration. This increases infrastructure costs and reduces flexibility. IIoT
introduces complexity on a scale previously unseen in the software industry. Software ar-
chitecture must accommodate these heterogeneous domains and competencies and handle
the increasing levels of complexity.
What is software architecture and how does it impact implementation design? A
brief introduction is made here and a more detailed discussion is presented in Chapter 2.
Unconstrained software can satisfy most user requirements in an almost unlimited number
of ways [14]. Software is made through a series of design decisions until implementation in
code. Each design decision made adds some constraint to the solution and each constraint
cascades to downstream design decisions. Early decisions, such as architectural ones, are
fundamental; they have a higher and longer-lasting impact on the overall solution. Hence,
software architecture should only constrain critical (load-bearing) parts of the solution
[2], thereby allowing as much flexibility for solution variation as possible. This practice
is in line with development methods such as Agile [15].
Software design has major implications for the migration toward Industry 4.0 [16]. When
designing new solutions, legacy systems, infrastructure and procedures must be taken into
account. For industrial automation this is acute because some equipment life-cycles can
span well over a decade. Software architecture must be designed to accommodate legacy
technology. On the other hand, software architecture must also minimize constraints
on how future (unknown) technologies can be integrated. Therefore, the software ar-
chitecture will evolve while the components may have limited ability to change. As the
architecture adapts for changing requirements and technologies, its architectural style
acts as an anchor through this evolution. An architectural style is the language or man-
ner in which the concrete architecture is described. It utilizes abstract concepts that
communicate intent and rationale of the software architecture. What would a suitable
architectural style be for IIoT engineering?
Architectural styles such as Service Oriented Architecture (SOA) [17] and Multi-
Agent Systems (MAS) [18] can be used to engineer IIoT applications. Research by
Colombo et al. in [8] reported the use of SOA as a promising architectural style for
cloud based process monitoring and control. IIoT frameworks and platforms such as the
Arrowhead Framework [19], the Far-Edge [20] and Axcioma [21] utilize SOA, MAS or
other component based architectures. Systems and Systems of Systems (SoS) theory was
introduced in the Arrowhead project [22] to supplement SOA abstractions. Introduction
of new concepts, such as local automation clouds, may be required [23] to enable the
merging of IT and OT abstractions and architectures.
The Reference Architectural Model for Industry (RAMI) 4.0 [24] also creates new
opportunity to rethink the principles and styles that make up software architecture in
the fourth industrial revolution [25]. Proposals such as the RAMI 4.0 attempt to allow
more flexible integration between Product Life-Cycle Management (PLM)/Enterprise
Resource Planning (ERP) and the shop floor. Traditional vertical integration through a
layered architecture introduces overhead. This may be overcome as advancements in edge
computing enable more functions, such as the Manufacturing Execution System (MES), to
exist in lower layers of the architecture. This has blurred the borders between IT and
OT domains, such that physical and logical entities contain functionality of both IT and
OT domains. These functions then cannot rely on a pure layered abstraction to separate
their concerns. For example, MES functions of tracking and execution may run on edge
computers [26] that are also running process control algorithms.
Decentralizing MES functions such as workflow management creates challenges such as
logic and information fragmentation. Workflow management is a traditional
research area in industrial automation and production [27]. The fourth industrial rev-
olution has pushed research of IIoT enabled workflow management [28, 29] and supply
chain [30]. Industry has also seen the potential of IIoT in supply chain management
[31, 32, 33, 34]. A workflow (or business process) is the combination of activities and
information toward the creation of new value [35]. Workflow patterns are split into four
broad categories: Control, Resource, Data and Exception Handling workflows. Web2.0
architectures such as REST and hypermedia have been proposed for data driven workflow
automation [36].
There are many challenges in IIoT engineering, among them: delegation of higher level
of ISA 95 functions into factory edge computing; mechanisms for composition and co-
ordination; interoperability across a plethora of protocols; information sharing between
machines; engineering of IIoT applications (architecture); and more.
This thesis addresses the challenges of communications interoperability, information
extraction, workflow management and IIoT software architecture design.
application domain, embedded systems do not have as rich an architectural repertoire
as the enterprise systems above. As embedded and OT systems become networked and
participate in the IIoT, their architectures must mature.
The research questions posed, led the author from practical interoperability, towards
information sharing in industrial distributed systems and workflow management. The
questions are as follows:
Q1 What possibilities and limitations are there for efficient and interoperable multi-
protocol IoT networking?
The area of IoT has evolved to incorporate a plethora of messaging protocol standards,
both existing and new, emerging as preferred communications means. It appears unlikely
that IoT applications will come to consensus on a single standard protocol. The variety of
protocols and technologies enable IoT to be used in many application scenarios. However,
this creates vertical silos and reduces interoperability between vendors and technology
platform providers. There are existing approaches to tackle the interoperability challenge:
Protocol middleware tends to move the interoperability problem rather than solving it
and impacts application design; and protocol proxies have scalability issues, requiring
configuration effort and introduce processing overheads. The challenge here is to make
an interoperability solution that supports transparent translation, is scalable, has secured
interfaces, can perform error reporting and conducts QoS monitoring and control.
Q2 What are the possibilities for ancillary information extraction within SOA-based
heterogeneous networks?
Communications within a production line system will often be limited to only op-
erational communications required to complete the work. This is to avoid operational
performance degradation through network or data access congestion. But this means that
non-operational data access, such as Key Performance Indicator (KPI) data, is either re-
stricted or prohibited altogether. KPIs are critical to understanding and optimizing the
production line processes. Domain engineers today complain of the need to utilize soft-
ware support/engineering in order to change KPI generation. This question asks if it is
possible to enable domain engineers at the edge to extract non-operational information.
In answering these research questions, it became clear that the design of the overall
system is the key toward interoperability, information extraction and functional distri-
bution. This leads to the next question.
Q4 What are the underlying architectural principles and styles that can guide design of
Industrial Internet of Things applications?
work. Part II consists of ten appended papers that contain the core research contribu-
tions of this thesis. All appended papers have been published, accepted or submitted
for publication in peer-reviewed scientific journals or conferences. The appended papers
have been reformatted to match the layout of this thesis without changing their contents.
Part I is divided into five chapters. Chapter 2 introduces background technologies,
methods and domains and gives an overview of theoretical foundations for software archi-
tecture. In Chapter 3 the research questions are explored, proposed approaches are pre-
sented and prototyped systems are described. The software architecture style is presented,
discussing proposed architectural principles and views. Chapter 4 summarizes the re-
search contributions of the appended papers to this thesis. Chapter 5 draws conclusions
and presents an outlook for future work.
Chapter 2
Software, Systems and the IIoT
In this chapter, the background technologies and domains are introduced. Relevant
concepts are defined according to how they are used and what they mean in the context of this
thesis.
used. The traditional security triad of confidentiality, integrity and availability still applies
to the IIoT, and now privacy must also be included. Traditionally, industrial computer
networks rely on network segregation with highly controlled network access [55] or an “air
gap” between factory floor and IT networks [56]. This includes using firewalls to control
what connections are allowed to pass between network segments; for example, network
traffic entering and leaving the factory may be fully denied. An IoT application may be
deployed through a public app store, with terms and conditions often unread. However,
IIoT applications have monetary consequences and could expose commercially sensitive
information. Authenticity and terms and conditions must be taken seriously. Hence,
mash-up or app store approaches must provide means of ensuring commercial certainty
that security vulnerabilities will not be exposed.
IIoT applications operating in continuous production require QoS agreements and
monitoring. QoS refers to the non-functional requirements of an application. The QoS
concerns could be battery life time, bandwidth, round trip delay, redundancy, backup,
resilience, recovery and more. QoS is an important issue for IIoT, and it must therefore comply
with stricter limits, whereas IoT has relaxed QoS concerns, to the degree that they may not exist.
The life-cycles of IIoT applications differ from those of IoT. IoT application life-cycles
have fewer verification steps and a more rapid turnover, with faster adoption of new
technologies. By comparison IIoT applications must pass thorough testing, simulation,
validation and verification prior to deployment. The deployment environment of IIoT ap-
plications requires integration with areas, such as legacy systems and devices, simulators,
intelligent robotics, big data, analytics and augmented reality etc. In addition, IIoT must
not introduce cyber security vulnerabilities to other areas, such as robotics. Therefore,
a software architecture style must account not only for IIoT but also the surrounding
domains. The next section introduces Industry 4.0 and some of the fields surrounding
IIoT.
Figure 2.1: The RAMI 4.0 cube. The vertical axis represents software concerns, the
horizontal axis represents life-cycle stages, and the diagonal axis represents the automation hier-
archy.
The RAMI 4.0 cube has some interesting properties for software architecture. The
three axes of RAMI 4.0 highlight the multi-dimensional nature of Industry 4.0 and IIoT.
From the vertical axis, the software layers are separated across the automation hierarchy.
Whereas, from the diagonal axis, the automation hierarchy levels contain each of the
software layers. Before diving deeper into the implications of the RAMI 4.0 on software
architecture, the next section describes software architecture practice and origins.
is also used as a framework for technical and managerial decision making [2]. The archi-
tecture is not only used for design and implementation decisions, but also cost estimation,
requirements validation and design analysis. Software architecture is a fairly recent re-
search area, with fundamental scientific papers published in the late 80s and early 90s.
As computing technology (hardware and networking), software practices and user ap-
plication spaces have evolved, so too have architectural practices. Recent research into
software architecture has focused on blueprints and patterns [38]. These reference archi-
tectures specify component entities and methods to tackle spatial distribution, streaming
and discrete information flow, and close coupling to the physical world.
At its essence software architecture follows the analogy of construction architecture
(as in civil engineering). Perry and Wolf [2] introduce it as the presentation of different
views for the purpose of communication. Similarly, construction projects have structural,
plumbing, electrical and aesthetics views. These views highlight key aspects (i.e. load
bearing walls) of the design and are utilized by different stakeholders in the validation
and implementation of the project. Perry and Wolf [2] propose a three view approach;
process, data, and connector views. The process view describes the operational con-
struction of transformation activities, the data view describes the information and how
it changes during the life of the application and finally, the connector view describes the
interconnection between transformation activities.
Capturing early design decisions and rationale guides the overall architectural style. An
overall architectural style is established dependent on how the architecture has described
these early design decisions and rationale. The concept of style again draws a parallel in
the construction world. For example, an architect may suggest that the home design follow
a Victorian style. This constrains and guides the subsequent design decisions, such as
flooring and lighting selections. Once a style has been built into the home, it will
influence future development or extension to the home. Similarly, once an architectural
style has been selected for an IIoT application it will influence future development or
extension of the application.
In a relatively short period of time, architectural concepts matured. Kruchten intro-
duced the 4+1 view model of software architecture [59]. This model was later adopted
by the Rational Unified Process (RUP). As shown in Figure 2.2, the "4+1" views are
logical, development, process and physical view, plus a scenario view that extends across
each of the former views.
The logical view is an entity model of the design. For example, this could be an object-
based decomposition. Here, application data and information models could be captured;
however, because the focus is on end-users or domain engineers, it may lack the needed
depth for software development. The development view is a detailed breakdown of the
structural aspects of the organization of the code. This assists with division of work
between developers and dependency matching of development resources. The process
view is an execution view of the design, looking into concurrency and synchronization
aspects, capturing their transformations and relationships. The physical view shows
the distribution of the software over the executing hardware, capturing certain non-
functional aspects of the design and guiding the deployment requirements and planning.
Figure 2.2: The 4+1 architecture with views mapped to the RAMI 4.0 software layers. The
business, communication and integration views are not represented in this architecture. On the
other hand, RAMI 4.0 does not have a development view.
Interestingly, Kruchten did not utilize separate connection and data views as
Perry and Wolf did. The 4+1 architecture does, however, pinpoint that each view targets
a different audience; this is an important element of architectural practice. Users are
interested in functionality presented in the logical view. Developers would split workload
and design code repositories based on the development view. Technical and business
integrators tune performance and scalability using the process view. System engineers
plan topology, communications and deployments based on the physical view. Kruchten
further elaborates that each view has its own set of elements, patterns, rationale and
constraints. Within each view different architectural styles can be used. The 4+1 style is
a meta style that allows freedom to adapt the architecture to a minimal set of constraints
for the intended usage. Scenarios are used as a shared validation vision - to make sure
that requirements are satisfied by each of the architectural views.
The World Wide Web (WWW), which morphed into a key technology over the 90s,
lacked a clear definition of its architectural style. Without a clear style, changes to the
WWW architecture lacked consistent stylistic principles and were based on short-term
solutions to immediate issues. In his dissertation, “Architectural Styles and the Design
of Network-based Software Architectures” [60], Fielding defines an architectural style
for the WWW and web application development. This study addressed this issue and
provided guidance on adapting and evolving WWW protocols and techniques. The the-
sis introduces software architecture as an abstraction of run-time elements. It describes
that an architecture is not only a static structural description of a system, but rather a
description of a dynamic running system. The resulting Representational State Transfer
(REST) architectural style has become a common approach to web application develop-
ment. REST combines a number of architectural styles, including Client-stateless-Server
with caching and proxying; layered; mobile code; and uniform interface. Each style uti-
lized by REST introduces a constraint to the freedom of design decisions, thus, providing
a foundation for the rationale of the design. The REST style definition can be used to de-
termine which application architectures conform to the web and those that do not. Thus,
a web application that is developed by several different teams maintains architectural
continuity and avoids embrittlement.
Before discussing advanced architectural styles, the next section provides a high level
overview of some fundamental ones. These styles are often used as a basis for creating
or as part of the advanced architectural styles.
2.4.4 Components
The Component architectural style is an intuitive grouping together of logic and func-
tionality. Components can be composed into applications because they use standard
interfaces. A component is self-contained logic that can be compiled into a modular
library; this is the main difference from object-oriented design. Hence a component may
contain many objects. This style is very flexible and has been used to a great extent for
automation and embedded systems engineering.
2.4.5 Repositories
Repositories are a style used for data-driven applications. They are often used where shared
access to memory is required or where intelligent software components are interdependent through
information. For example, a blackboard architecture can be used for a compiler with
multiple concurrent processes handling the multi-step transformation. Repositories are
often used with the n-tier style for web applications.
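As a small illustration of the repository (blackboard) style, the following sketch shows several independent processing steps that cooperate only through a shared data store; the knowledge-source names and transformation steps are purely illustrative.

# Minimal sketch of the blackboard/repository style: independent knowledge
# sources communicate only through a shared data store, here a plain dict.
# The knowledge sources and their transformations are hypothetical examples.

def tokenize(board):
    if "source" in board and "tokens" not in board:
        board["tokens"] = board["source"].split()

def count_words(board):
    if "tokens" in board and "word_count" not in board:
        board["word_count"] = len(board["tokens"])

def run_blackboard(board, knowledge_sources):
    # Repeatedly offer the board to every source until nothing new is added.
    changed = True
    while changed:
        before = set(board)
        for source in knowledge_sources:
            source(board)
        changed = set(board) != before
    return board

print(run_blackboard({"source": "loosely coupled knowledge sources"},
                     [count_words, tokenize]))

Because each source reads and writes only the shared board, sources can be added, removed or reordered without the others being aware of them.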
2.4.7 Interpreter
The interpreter style has been used for programming language development. The Python
Virtual Machine (VM) and the JavaScript engine are examples of common interpreter-based
solutions. The Java VM uses compiled byte code; this makes Java a hybrid language.
First the Java code is compiled to byte code, then the byte code is interpreted by the VM.
The architecture is flexible for changing command sets or for providing a full abstraction
over hardware/firmware changes. The interpreter style is used for many Domain Specific
Languages (DSL) and scripting languages. Underlying methods, functions and compo-
nents can be mapped to languages that are suitable for stakeholders as an application is
moved to new usage domains.
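As a minimal illustration of the interpreter style, the sketch below interprets a tiny, hypothetical domain-specific command language; the command set and handlers are invented for illustration, and swapping the command table retargets the same engine to a new usage domain.

# Minimal sketch of the interpreter style: a tiny hypothetical DSL whose
# commands are dispatched to underlying Python functions. Changing the
# command table adapts the language without touching the engine.

def interpret(program: str, env: dict) -> dict:
    commands = {
        "SET": lambda env, name, value: env.__setitem__(name, float(value)),
        "ADD": lambda env, name, value: env.__setitem__(name, env[name] + float(value)),
        "PRINT": lambda env, name: print(name, "=", env[name]),
    }
    for line in program.splitlines():
        if not line.strip():
            continue                      # skip blank lines
        op, *args = line.split()
        commands[op](env, *args)          # dispatch each command to its handler
    return env

interpret("SET temp 21.5\nADD temp 0.5\nPRINT temp", {})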
1. Standardised service contract Contracts are used by services to express their capabilities
and method of interaction.
4. Service reusability Requires that logic is broken down into agnostic functionality,
which can be reused within different contexts. This principle is dependent on the service
modelling approach used.
5. Service autonomy Means that services should control their own logic and environ-
ment. It is essential for reliable and consistent behaviour. The scope of the autonomy is
dependent on the service capability.
2.5.2 Microservices
The microservices architectural style is an evolution of SOA. It has grown out of the
experience of architects and engineers overcoming the pitfalls of the SOA style. The
arguments for moving from a monolithic application to a Microservices application are the
same as for moving to SOA. As stated by Newman in [64], Microservices are software
applications with a single purpose. Newman goes on to argue that SOA lacks clarity
on service design. In particular, guidance on service size and application decomposition
are two areas of ambiguity. In addition, Microservices also raise the notion of smart
endpoints and “dumb” pipes [64]. This is in response to heavy middleware such as ESB
that are more intelligent than the services.
1. Web as a platform
2. Harnessing Collective Intelligence
3. Data as the next Intel Inside
4. End of the software release cycle: the "perpetual beta"
5. Lightweight programming models
6. Software Above the level of a Single Device
7. Rich user experience
These principles in many ways reflect a smaller set of changes significant to soft-
ware development. The web as a platform directs software vendors away from building
applications as platforms for locking consumers in. Harnessing collective intelligence
is acknowledging the success of utilizing user input and participation over publishing. The
Web 2.0 users produce huge amounts of data. Data-driven applications are then sources
of revenue. The perpetual beta is linked to the Web as a Platform as software appli-
cations are on-going services rather than packaged artefacts. Lightweight programming
models boost the perpetual beta through quick development/release cycles. Software
built with a single computer in mind does not take advantage of the Web as a Platform
and the perpetual beta. A rich user experience is a defining factor if user collaboration is desired.
Additionally, user experience becomes the user lock-in and service differentiation.
These principles lead to eight design patterns for the Web 2.0:
1. The Long Tail
2. Data is the Next Intel Inside
3. Users Add Value
4. Network Effects by Default
5. Some Rights Reserved
6. The Perpetual Beta
7. Cooperate, Don’t Control
8. Software Above the Level of a Single Device
As can be noted, some of these design patterns are brought directly from the princi-
ples, showing that the principles must first be elaborated before patterns can be defined.
The Long Tail means not focusing only on the central applications of the Web, but on the
users of the web, targeting applications to the larger masses on the web. Network Effects
by Default promotes making privacy settings open by default; web users must opt out of
the data sharing. Some Rights Reserved indicates the need to make applications easy
to adopt. The pattern Cooperate, Don't Control means utilizing and building on shared
web services and reusing data services.
Some of these principles and design patterns are highly relevant to web software
engineers, whilst others are specific to the Web. The future of software development
is bringing about a loosely coupled set of software applications, each of which can be in
perpetual beta whilst being developed cooperatively by ad-hoc combinations of software
developers.
2.6 Systems
As defined by the US DoD JP 1-02, a system is:
“A functionally, physically, and/or behaviorally related group of regularly
interacting or interdependent elements; that group of elements forming a
unified whole”[76]
While ISO/IEC 42010 defines a system as:
“A collection of components organized to accomplish a specific function
or set of functions”[77]
These two definitions provide a basis for understanding what makes up a system. In
this thesis, a system is a key building block for understanding an architecture. Here we
say, a system captures a fully autonomous entity with an independent set of functions,
life-cycle and purpose (objective). A system may or may not communicate and collab-
orate with other systems. However, in our case, many of the systems will need to work
collaboratively in order to achieve their own goals. Systems thinking means that when
designing a system, all aspects of the life of the system must be considered. This means
that all software and hardware aspects, such as logical, functional, structural, security,
mechanical housing, spatial location, electronic and human interaction, are captured un-
der a single entity. Spatial and temporal boundaries have been the intuitive delineation
lines between systems. However, as the definitions above indicate, functional boundaries
must also be considered. Adding to this managerial (change/configuration management),
political and economic boundaries will capture strong lines of separation between sys-
tems.
1. Stable Intermediate Forms While designing SoS, incomplete forms of the SoS
should be designed and put into action. As stated, an SoS is made up of component
systems. Utilizing a sub-set of these component systems to create a partially functioning
but stable SoS will act as a proof of concept and a platform for early learning. This
principle matches very well with concepts of Agile development and iterative design.
2. Policy Triage The SoS engineering team does not have control over the system
development or modes of operation. They must triage the situation and choose carefully
where to exert influence on systems teams and where/when to adapt the SoS to the system
choice. This is captured as over-control, leading to failure due to lack of authority, or
under-control, leading to failure to achieve an integrated SoS.
3. Leverage at the Interfaces The component systems of an SoS are often designed
and managed independently of each other. Furthermore, the SoS designers have limited
influence over internal architecture of the systems. This leads to SoS design having a much
greater emphasis on interfaces between systems rather than the design of component
systems.
1. Virtual Virtual SoS lack centralized management and purpose. Behaviour is not
created with intention; rather, it emerges from the resulting SoS. The Internet is an
example of a virtual SoS: there is no enforcement of Internet standards and there is no
centrally agreed reason for the Internet. The Internet Engineering Task Force (IETF)
[81] must utilize principle 4, ensuring cooperation, to create the Internet standards.
2. Voluntary Voluntary SoS can also be referred to as collaborative SoS. They involve a cen-
trally agreed purpose of the SoS, but do not prescribe required adherence to the SoS. SoS
management does not have any coercive power over individual systems. This is perhaps
the most challenging and common form of SoS within the context of IIoT.
3. Directed In the case of directed SoS, purpose and management are fully centralized
and have significant control over component systems. The total SoS design lends itself
closely to Monolithic assumptions regarding behaviour control.
2.8 Summary
In this section the Industrial Internet of Things and some software architecture ap-
proaches were discussed. Architectural styles like SOA, MAS, Web2.0 and microservices
all have contributing principles toward a software design and development for the IIoT.
However, there are certain challenges of IIoT that can be handled by one style but not
another. Pure SOA usage in a physical environment lacks the expressiveness needed for modelling
physical entities with multiple, inter-dependent services. Pure MAS has high resource
demands on nodes that are constrained in battery life and communications capability.
Web2.0 focuses predominantly on web-based commerce and interaction, with little
attention to low-bandwidth or niche requirements of industrial use cases. The principles
of Microservices add many advantages over SOA and can be useful to IIoT. They,
however, lack the expressiveness required for understanding the physical nature of
dependent services in IIoT nodes.
In the next chapter, a consistent argument for a new architectural style is presented,
based on defining a set of principles that are able to cope with the challenges of IIoT
software architecture design. SoS is used along with SOA as the basis for modelling
interacting IIoT nodes. Functionality is spread amongst the nodes and applications
emerge from the resulting SoS. SoS is a flexible notion that lends itself well to environments
with highly varying and heterogeneous capabilities. For example, a company or society
can be modelled as a SoS with humans making up the component systems. An employee
will have a set of personal objectives and capabilities, for example, objective of earning
an income and capability of developing software. Employees in a company work in a
collaborative centrally administered organisation. Products and services are the result
of the emergent value of the collaborating employees.
Chapter 3
Architecting IIoT Software
In this chapter the novel solutions to the research problems are elaborated. The first
section presents the communications protocol interoperability solution and its properties.
In section two, the information extraction design and solution is presented. Section
three describes the decentralized MES solution for workflow execution, tracking and
management. The fourth section provides a detailed proposal for guiding architecture
development for IIoT applications.
Figure 3.1: The internal architecture of the proposed protocol translator. The upper persistent
system will provision dynamic instances as required. Each instance has a custom protocol
pairing and service configuration.
ing the request for protocol bridging. The hub shall monitor the bridge for breach of
QoS agreement and raise error warnings if needed. Each protocol spoke shall be pro-
tected against malicious users through access control policy enforcement. The protocol
spokes are responsible for all protocol and service related behaviours, such as active
polling/publishing when bridging from a request/response provider pattern to a pub-
lish/subscribe consumer pattern.
The multi-protocol translator is an important component of the Arrowhead Frame-
work [19]. It constitutes a core support system providing interoperability and thus seamless
operation of IIoT applications.
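The sketch below illustrates the hub-and-spoke structure described above in simplified form. The class and method names are illustrative only and are not the Arrowhead Framework implementation; real spokes would embed actual protocol stacks (e.g. CoAP or MQTT clients) rather than the stubs shown.

# Structural sketch (illustrative names) of the hub-and-spoke translator: the
# persistent hub provisions a bridge instance per protocol pairing, and each
# spoke encapsulates all behaviour of one protocol.

from abc import ABC, abstractmethod

class ProtocolSpoke(ABC):
    """One side of a bridge; owns all protocol-specific behaviour."""
    @abstractmethod
    def forward(self, message: bytes) -> bytes: ...

class CoapSpoke(ProtocolSpoke):
    def forward(self, message: bytes) -> bytes:
        # A real spoke would issue a CoAP request and return the payload.
        return b"coap-response"

class MqttSpoke(ProtocolSpoke):
    def forward(self, message: bytes) -> bytes:
        # A real spoke would publish/subscribe on an MQTT topic.
        return b"mqtt-response"

class BridgeInstance:
    """A dynamically provisioned instance with a custom protocol pairing."""
    def __init__(self, consumer_side: ProtocolSpoke, provider_side: ProtocolSpoke):
        self.consumer_side, self.provider_side = consumer_side, provider_side

    def relay(self, message: bytes) -> bytes:
        # The hub could wrap this call with QoS monitoring and access control.
        return self.provider_side.forward(message)

class TranslatorHub:
    """Persistent system that provisions bridge instances on demand."""
    def __init__(self):
        self.spokes = {"coap": CoapSpoke, "mqtt": MqttSpoke}
        self.bridges = {}

    def provision(self, consumer_proto: str, provider_proto: str) -> BridgeInstance:
        key = (consumer_proto, provider_proto)
        if key not in self.bridges:
            self.bridges[key] = BridgeInstance(self.spokes[consumer_proto](),
                                               self.spokes[provider_proto]())
        return self.bridges[key]

hub = TranslatorHub()
bridge = hub.provision("mqtt", "coap")
print(bridge.relay(b"read temperature"))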
is described using the proposed graph taxonomy. A graph model based on this taxon-
omy can be analysed/queried to retrieve functional, communication and security related
edges and nodes. The nodes in the graph taxonomy are: devices, systems, service types
and interfaces, objects, users, attributes and operations. The edges in the graph taxon-
omy are: hosted by, provided by, requires, offered by, supports, implemented by, aliases,
represents and defined.
The graph model is queried first for a functional path between the data source and the
data sink. Figure 3.2-a shows the resulting functional bipartite graph. If the functional
graph is connected, then the communications graph can be queried. Using functional
nodes as anchors, the graph model is queried for interface nodes. Figure 3.2-b shows the
resulting communications bipartite graph. If the communications graph is connected,
Figure 3.2: System composer graph query results. a) functional graph, b) communication graph
then the security graph can be queried. Using the communications nodes as anchors,
the graph model is queried for user, object and operation nodes. These nodes must be
connected through attribute nodes that satisfy the access control policy. If the policy
has been satisfied, then the communications bipartite graph can be used to form the SoS
specification (system interconnection rules).
As shown in Figure 3.3 the system composer interacts with external registries and
information stores to acquire data regarding the active environment. The data retrieved
is processed by a graph updater using the proposed SoS graph taxonomy. The updater
pushes the new graph data to the graphing module. This recurring process is either
periodic or event driven depending on service interface design of the external systems.
The graphing module uses a graph database (in this case Neo4j) to store and query the
graph model. The graphing module receives requests from a query parser. These requests
are graph queries and could be SPARQL or Open Cypher requests. The graphing module
runs the received query across the graph model held in the database and returns the
result. The query parser processes the results from the graphing module and builds a
set of interconnection (orchestration) rules. These interconnection rules make up the
SoS specification and this is sent to an external SoS manager, such as the Arrowhead
Orchestrator. In parallel the query parser will have responded to the application system
that originated the information extraction query. This application system will have
originally made its request through the information extraction service provided by the
system composer.
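The following sketch illustrates the functional-path query step using the Neo4j Python driver and an Open Cypher query. The node labels, relationship types and property names are assumptions made for illustration and are not the exact schema of the implemented system composer.

# A minimal sketch of the functional-path query, assuming a hypothetical
# property-graph schema (labels System, relationships PROVIDES/REQUIRES drawn
# loosely from the taxonomy above); the real schema and queries may differ.

from neo4j import GraphDatabase

CYPHER_FUNCTIONAL_PATH = """
MATCH path = (src:System {name: $source})-[:PROVIDES|REQUIRES*]-(snk:System {name: $sink})
RETURN [n IN nodes(path) | n.name] AS hops
LIMIT 1
"""

def functional_path(uri: str, user: str, password: str, source: str, sink: str):
    driver = GraphDatabase.driver(uri, auth=(user, password))
    try:
        with driver.session() as session:
            record = session.run(CYPHER_FUNCTIONAL_PATH,
                                 source=source, sink=sink).single()
            # A returned path means the functional graph is connected; the
            # composer then continues with the communications and security
            # queries before building the interconnection rules.
            return record["hops"] if record else None
    finally:
        driver.close()

# Hypothetical usage:
# print(functional_path("bolt://localhost:7687", "neo4j", "password",
#                       "TorqueSensor", "KPIDashboard"))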
The system composer was prototyped and applied to wheel loader and nutrunner case
studies. It has been proposed for inclusion in the Arrowhead Framework as a core
support system.
Figure 3.4: The proposed MES workflow management solution and the interactions between its
constituent systems.
The proposed workflow manager implements the activity diagram in Figure 3.5. When
a new product arrives at the workstation, the workflow manager retrieves the product ID.
The workflow manager queries the intelligent product for the local production specifica-
tion (workstation related information). The workflow manager filters the local workflow
and provides an operation list to the workflow executor. The workflow manager receives
the operation report from the workflow executor and sends this back to the intelligent
product. The product then leaves the workstation.
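The activity flow of Figure 3.5 can be summarized by the following sketch. The service objects and payload fields are hypothetical stand-ins for the Arrowhead services consumed by the workflow manager in the pilot.

# A sketch of the workflow-manager activity described above. The service calls
# (intelligent_product, executor) and payload fields are hypothetical.

def handle_product_arrival(workstation_id, intelligent_product, executor):
    # 1. A new product arrives; retrieve its ID.
    product_id = intelligent_product.read_product_id()

    # 2. Query the intelligent product for the local production specification.
    spec = intelligent_product.get_production_spec(product_id, workstation_id)

    # 3. Filter the workflow down to operations relevant to this workstation.
    operations = [step for step in spec["workflow"]
                  if step["workstation"] == workstation_id]

    # 4. Hand the operation list to the workflow executor and await its report.
    report = executor.run(operations)

    # 5. Return the operation report to the intelligent product; the product
    #    then leaves the workstation.
    intelligent_product.store_operation_report(product_id, report)
    return report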
to perform to produce the given product. The workflow executor delegates jobs to workstation
equipment as specified in the state machine. It utilizes its SoS to carry out the workflow
state machine.
The workflow executor is a software system running on an edge computing node. It does
not interface with services outside the local workstation scope. Its integration with
(legacy) equipment interfaces means that it may have customization dependent on the
workstation. The workflow executor block diagram is shown in Figure 3.6. It illustrates
the dynamic interfaces to equipment and operation step state machine engine. The
equipment interfaces grow and shrink based on the interconnection rules returned from
the SoS manager (Arrowhead Orchestrator).
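A simplified sketch of the workflow executor's operation-step engine is given below. The equipment interface names and operation format are assumptions for illustration; in the implemented system these bindings follow the interconnection rules returned by the Arrowhead Orchestrator.

# A sketch of the workflow executor's operation-step engine. Equipment
# interface names and the step format are hypothetical.

class WorkflowExecutor:
    def __init__(self):
        self.equipment = {}           # dynamic equipment interfaces

    def bind_equipment(self, name, interface):
        # Interfaces are added/removed as orchestration rules change.
        self.equipment[name] = interface

    def run(self, operations):
        report = []
        for step in operations:       # each step names the equipment and job
            tool = self.equipment[step["equipment"]]
            result = tool(step["job"])            # delegate the job
            report.append({"step": step["id"], "result": result})
        return report

# Hypothetical usage with a simulated nutrunner interface:
executor = WorkflowExecutor()
executor.bind_equipment("nutrunner", lambda job: f"tightened to {job['torque']} Nm")
print(executor.run([{"id": 1, "equipment": "nutrunner",
                     "job": {"torque": 45}}]))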
For example, in [82], a plant description system is used to engineer the workstation
prior to any workflow execution. The workflow and Arrowhead orchestration are closely
related. There are still design questions on how to best delineate functions of both.
These questions are ongoing research and discussion within both the Productive 4.0 and
Far-Edge project consortium.
Proposed systems have been prototyped and are part of the autonomous workstation
pilot at the Volvo Trucks facility.
stream systems could be interrupted. System life-cycle is another issue with system inter-
dependence: Change management that requires synchronization amongst interdependent
systems increases deployment effort (cost of change).
4. Data modelling The intersection of different domains exacerbates the data mod-
elling issue for IIoT. Architects may need to utilize semantic web with highly typed
representations of data, or plain text/binary data over a low-power wireless link. With
evolving techniques and technology this challenge is always moving. This challenge is in
semantics of data, usage of data, storage of data and serialization of data.
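As a small illustration of the serialization dimension of this challenge, the same sensor reading can be carried as a self-describing SenML-style JSON record or as a compact binary packing suited to a low-power wireless link; the field names and scaling used below are illustrative, not a normative encoding.

```python
# The same temperature reading serialized two ways: a self-describing
# SenML-style JSON record, and a compact fixed binary layout that could be
# sent over a low-power wireless link. Field names and scaling are
# illustrative, not a normative encoding.
import json
import struct

reading = {"n": "urn:dev:ow:10e2073a01080063:temp", "u": "Cel", "v": 23.5, "t": 1530000000}

# Self-describing: carries its own semantics, but costs tens of bytes.
json_payload = json.dumps([reading]).encode()

# Compact: 6 bytes (uint32 timestamp + int16 centi-degrees); the semantics
# must be agreed out of band or recovered from the data's source context.
binary_payload = struct.pack("!Ih", reading["t"], int(reading["v"] * 100))

print(len(json_payload), "bytes as JSON;", len(binary_payload), "bytes as binary")
```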
6. User interactions The lines of separation between computers and humans are not
clear cut. Users interact with machines through augmented reality, robotics and more. So-
lutions cannot target a single display; information must flow seamlessly between augmented
reality and traditional displays. Virtual commissioning and human-machine
co-working also require fluid movement of information.
9. Data security Data security is a major challenge, with so much networked data being made
available and with increasing operational dependence on access to this data. Networks of
multi-vendor applications open traditionally secure environments to sloppy
development or malicious code, and the lines of attack for social engineering increase. This
challenge deals with access control, forged credentials, social engineering and networks
of multi-vendor applications.
12. Development practices OT engineers are accustomed to high-cost, well-planned
stage-gate projects, while IT developers prefer iterative, time-boxed projects. Although
there remain differences between IT and OT, the software life cycle is becoming
shorter.
IIoT solutions must support evolutionary development and have fast mean time to
failure. Life-cycles of IIoT systems can vary greatly. At any given time, systems will
be at different stages in their life-cycle. The challenge for architects is to delineate
between systems so that sporadic disruptions do not interrupt overall operation of the
IIoT application.
The challenges listed here are not application requirements. They are general issues
that architects must be aware of, and they influence architectural considerations even
before requirements are considered. In order to address these challenges from a software
architecture perspective, a set of architectural principles must first be defined.
Whereas the higher abstraction layers do not specify component instances, a pattern will have
instances of components that resolve a specific problem or set of problems. For example, Bell
documented common SOA patterns in [84]. Each concrete architecture should be able to track
its reasoning through the layers up to an architectural principle.
Figure 3.7: Software architecture abstraction layers. Small changes in the principles at the core
echo through all the layers.
1. Loose coupling (Adapted from SOA) Firstly, each decomposition element is inde-
pendent of the internal design of other elements. This means that, for example, logic, intel-
ligence, implementation technology or in fact (internal) composition can change without
impacting other elements. This helps to improve the maintainability and changeability of
the elements.
5. Specialization This principle suggests that each element should have a limited
number of concerns. The decomposition elements can be responsible for varying amounts
of functional or business logic. An extreme of this principle would be, as with microservices,
to limit each element to a single responsibility. This principle must be balanced with
other principles such as autonomy.
6. Data at its source This principle guides the architecture to avoid centralized
data stores where possible. Data is a commodity with value. Access must be controlled
and in some cases charged for. This principle undertakes to embed privacy by design as a
characteristic of the architecture prior to any development. There are mechanisms to
allow caching and proxying, whilst giving the data source origin certain controls over the
data.
Architectures such as SOA and MAS address decomposition aspects outside the scope
of the style. SOA uses architecture patterns [85] to guide application decomposition. In
MAS, design methodologies are used to guide the decomposition process, for example,
system oriented methodologies, such as Tropos [86] and Prometheus [87], propose agent
task and role decomposition. The approach proposed here guides architects on how to
apply the principles to decomposition in a generic manner. Therefore, without introducing
specific patterns, architects can begin to shape the overall architecture. To this end, it is
assumed that the requirements of the IIoT solution have been gathered and are presented
as a set of feature requirements. These features document both functional and non-
functional requirements, although they are more focused on functional requirements.
These features are used by the architect to decompose the application/solution. The
objective of the architect is to map these features into elements, connectors and modules.
Along with the rationale, these architectural components will be presented in different
views, dependent on the intended target stakeholders. Each of these steps should be
recorded and kept.
Autonomy A high-level solution can be modelled as a black box with all capabilities
required to satisfy the requirements, with no detail on how it achieves this. This is insuf-
ficient, and a developer would come up with an architecture based on limited information.
By studying the feature life cycle, usage and spatial separations, this black box is broken
down into groups of capabilities. This first step of functional decomposition results in
a number of black boxes representing groups of capabilities that have similar demands
of autonomy. The next step is to look into specialization.
Specialization The architecture is now black boxes decomposed based on life cycle,
spatial distribution and decision-making autonomy. Each black box of capabilities could
have quite different objectives or implementation requirements (e.g. legacy equipment).
Within each black box, split the capabilities based on their level of relatedness. Creating
specialization is at the architect's discretion, and there is no definite way to determine
relatedness. However, where multiple processes are involved, the lines may be drawn
based on equipment scope boundaries. This will result in a higher granularity of black
boxes that are more meaningful to developers.
Data at its source The black box groupings of capabilities can now be related by data
usage and storage. The architect should guide (constrain) the solution to utilize data at
its source. This is done by introducing direct(ed) relationships between the capability
black boxes. Where it is not possible to maintain data at the source, additional
(non-functional) cache/proxy/archive capabilities should be added. The architecture now
has hierarchical groupings of capabilities decomposed by autonomy and specialization and
connected through directed data relationships.
First person perspective The black boxes can now be viewed as systems. Each
system has a set of capabilities that it is responsible to fulfil. By putting themselves into
the perspective of the system the architect questions how changes in the environment may
impact capabilities. Looking outward at other systems, what are my functional and data
dependencies? Do I have any fail safe contingencies or are there any mitigating factors
for dependency failure or loss? How will others make use of the capabilities I am offering?
Looking inward, what is my life cycle? What mechanisms do I have for optimizing my
life time? How can I trust other systems and how can I prove who I am? What are
my security vulnerabilities or strengths? By applying the first person perspective, the
architect will have captured capabilities as concrete service interfaces. This principle
has the added benefit that an architecture that rationalizes such considerations can be
utilized by engineering teams to identify and better guide implementation technology
choices. In addition, this step must address the security constraints, keeping data safe
and safeguarding against malicious interactions.
Depending on the specific application demands, the proposed architectural style can also be
combined with other architectural styles, such as client-server and pipes and filters.
Figure 3.9: The proposed architectural style for designing IIoT applications.
An on-going example will be used to illustrate the proposed architectural style. The
example is from a pilot use case of the Far-Edge H2020 project. It involves two primary
cases: plug and produce, and the autonomous workstation. As a high-level introduction to
the use case: 1) When moving equipment between workstations there is a need for
automated configuration of the equipment and workstations, such that domain specialists
can make factory-floor changes without assistance from IT resources. 2) There
is a need to make the workstation able to operate with little or no factory-wide IT
infrastructure. It is autonomous and decoupled from factory-wide IT applications such
as the MES.
3.7.1 Views
Figure 3.9 illustrates the views as they reflect the interests and concerns of all stake-
holders. A view must capture important design decisions relevant for the stakeholders
(including the architect). Views are a critical method of creating focus on relevant
concerns. The RAMI 4.0 layers define six areas of concern: business, functional, information,
communication, integration and asset. Roughly speaking, IT has focused on the upper three
layers, in contrast to OT, which focuses on the lower three or four layers, often providing
a thin service layer for interaction with IT infrastructure. Industrial middleware [88] has
positioned itself somewhere within the middle four layers, generally abstracting the physical
assets and remaining flexible to business processes. The suggestion here is to capture the
different RAMI 4.0 layers as application network views. Each view captures how its concern
is distributed across the relevant RAMI 4.0 hierarchy components. The views in this
architecture style can be as simple as a catalogue of concerns.
As already mentioned, systems and SoS make up the core modelling elements. The
concept of systems maps well to capturing the concerns of each view. IIoT elements
are modelled as systems, such that physical and logical independence is provided for
each element. Battery, processor, RAM, flash, networking (mesh vs tree), actuation and
sensing are at the discretion of the domain architects and developers of each individual
system. This provides the architects of each IIoT system with the capability requirements
arising from the business, function, information, communication, integration and asset concerns.
Business view
The business view captures concerns regarding business value
generation, regulatory compliance and domain or organization
norms.
The business view must capture the networked distribution of domain logic across the
IIoT device hierarchy. This requires a catalogue of requirements that captures value-gener-
ating activities, regulation compliance activities and any organizational norms. For each
requirement a matching capability must be identified, so that a set of business capabilities
is defined that fulfils all business requirements. Finally, the business capabilities
should be mapped to IIoT device hierarchy groupings. At this stage there is no direct
mapping to systems, only to the system classes (e.g. product, field device, work centre,
enterprise). This means that requirements and capabilities at the business layer
are visually allocated to a specific layer of the IIoT hierarchy. The decomposition of the
capabilities and the subsequent mapping to the hierarchy apply the decomposition
technique defined in Section 3.6. This means that visualization and decomposition of
business logic leads to understanding of infrastructure requirements and functional require-
ments. It is thus the starting point for the functional view, which must support the
business operations and objectives.
During creation of this view/form, the architect must ask themselves:
1. What business considerations is each component group responsible for?
2. What are the dependencies between constraints (and therefore the dependencies between
groups)?
3. Are there any direct or indirect impacts of the constraints on the overall design of
each group?
For example, a regulatory requirement may require execution of certain business op-
erations. Figure 3.10 captures the capability mapping required to provide regulatory
compliance for safety nut tightening. That is, the safety nut, controlling the wheel align-
ment, must be tightened to a specific torque and follow-through angle. This value must
be recorded and kept on record by the truck manufacturer. This view then shows how the
responsibility for performing the logical operations is distributed amongst the different
hierarchical components to achieve the objective. With this view, the architect is able
to validate the business rule for wheel-alignment safety nut tightening.
Figure 3.10: The business view is made up of simple black boxes representing each layer of
the IIoT device hierarchy with the allocated business capabilities decomposed from the business
concerns.
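A minimal sketch of how such a business-view catalogue could be recorded is given below, using the safety nut example; the field names, capability names and hierarchy labels are illustrative assumptions, not the actual catalogue used in the pilot.

```python
# Sketch of a business-view catalogue for the safety nut example: each
# business requirement is mapped to a capability and allocated to a level of
# the IIoT device hierarchy (no concrete systems yet). Field names,
# capability names and hierarchy labels are illustrative assumptions.
business_view = [
    {
        "requirement": "Record torque and follow-through angle of the wheel-alignment safety nut",
        "driver": "regulatory compliance",
        "capability": "tightening result archiving",
        "hierarchy_level": "enterprise",
    },
    {
        "requirement": "Tighten the safety nut to the specified torque and angle",
        "driver": "value generation",
        "capability": "controlled nut tightening",
        "hierarchy_level": "field device",
    },
    {
        "requirement": "Associate the tightening result with the product being assembled",
        "driver": "regulatory compliance",
        "capability": "product traceability",
        "hierarchy_level": "product",
    },
]

def capabilities_at(level):
    """Return the business capabilities allocated to one hierarchy level."""
    return [e["capability"] for e in business_view if e["hierarchy_level"] == level]

print(capabilities_at("field device"))
```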
Functional view
The functional view captures concerns regarding system identi-
fication and capability assignment.
The business view captured the capabilities required to deliver value, stay within
regulations and follow organizational norms. It also allocated those capabilities to
the IIoT device hierarchy. Next, the business capabilities are turned into functional
requirements; domain knowledge is required to do this. In addition, user requirements,
such as interaction requirements, must be captured. These requirements are all captured
in a catalogue that will enable the business capabilities and user interaction needs. For each
catalogued requirement, a functional capability must be defined such that it satisfies the
business capability at the required IIoT device hierarchy level. The functional capabilities
must be decomposed according to the method in Section 3.6. Decomposition of the
functional capabilities will result in capability groupings that can be referred to as systems.
Hence, the component systems of the architecture have been identified, mapped to the IIoT
device hierarchy and their capability offerings/dependencies defined. The functional view
captures how objectives are distributed across, and satisfied by, the networked systems.
Each system provides capabilities as services that can collaborate to complete business
objectives. The systems provide and consume these services, so the functional
dependencies between systems are identified as service interfaces. This is the view that
will often be used when communicating the solution back toward the customer and/or
system users.
While creating this view, the architect should be asking questions such as:
1. What functionality needs to be implemented to support the business capabilities?
2. How will users interact or work together with the systems to deliver the business
capabilities?
3. How will systems interact with one another to deliver the business capabilities?
4. Are there any systems with too many dependencies (i.e. high impact if failure
occurs)?
Following on from the earlier business rule regarding safety nut tightening, the systems
and services required to satisfy the business capability are shown in Figure 3.11. Of
course, these functional capabilities can also be reused for other business capabilities.
Figure 3.11: The functional view consists of the component systems and capability services with
dependency identification.
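The functional view lends itself to a simple machine-readable form that lets the architect answer questions such as number 4 above. The following sketch records systems, the services they provide and consume, and computes the fan-in of each service; the system and service names loosely follow the nut tightening example but are illustrative only.

```python
# Sketch of a functional-view model: systems, the capability services they
# provide, and the services they consume. System and service names loosely
# follow the nut-tightening example but are illustrative, not the actual design.
systems = {
    "Nutrunner":          {"provides": ["nut-tightening"], "consumes": ["tightening-programme"]},
    "WorkflowExecutor":   {"provides": ["operation-execution"], "consumes": ["nut-tightening"]},
    "WorkflowManager":    {"provides": ["workflow-coordination"],
                           "consumes": ["operation-execution", "local-production-spec"]},
    "IntelligentProduct": {"provides": ["local-production-spec"], "consumes": []},
}

def consumers_of(service):
    """Which systems depend on a given service (impact if it fails)."""
    return [name for name, s in systems.items() if service in s["consumes"]]

for name, s in systems.items():
    for service in s["provides"]:
        print(f"{service} (provided by {name}) is consumed by {consumers_of(service)}")
```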
Information view
The Information view captures the concerns of what form data
takes and what meaning it has.
The functional capabilities and dependencies have created service exchange paths
between systems. It is the business capabilities and user interaction requirements that
have motivated the systems and service exchanges. The data that is generated by the
execution of a capability, or required in order to perform a capability, has not yet been
defined. The Information view captures what data is required/generated, what the meaning
of the data is and what format it takes.
While creating this view, the architect should be asking questions such as:
1. What does the capability do? How does its execution affect the meaning of the
information?
2. Which capability produces the information?
3. Which capability requires the information?
4. What are the dependencies between information?
The proposed architectural style does not dictate the elements or methods used in
the view. Any ontological language could be used to describe the information or an
external ontology could be referred to here. It is also possible to perform conceptual
data modelling here using techniques such as entity-relationship modelling [89]. In Figure
3.12 an example Entity Relationship Diagram (ERD) is shown. It defines the conceptual
model of the information required to make use of the nut tightening capability. It also
defines the information generated by execution of the nut tightening capability. The
generated information is shown to relate to the product entity. This view has defined
what information is used by the functional capabilities and where the sources and sinks of
the information are. Using this model, developers can understand exactly what information
must be stored for the wheel-alignment safety nut tightening.
Figure 3.12: This Entity Relationship Diagram shows the information used and generated by
the Nutrunner as it executes the nut tightening capability. It also relates the results back to
the product being worked upon.
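The conceptual model of Figure 3.12 could later be carried into implementation as simple typed records, for example as in the sketch below; the attribute names and units are illustrative assumptions rather than the exact entities of the diagram.

```python
# Sketch of the conceptual information model as typed records. Attribute
# names and units are illustrative assumptions, not the exact entities of
# Figure 3.12.
from dataclasses import dataclass

@dataclass
class TighteningProgramme:
    """Information required to make use of the nut-tightening capability."""
    target_torque_nm: float
    target_angle_deg: float

@dataclass
class TighteningResult:
    """Information generated by executing the nut-tightening capability,
    related back to the product entity being worked upon."""
    product_id: str
    achieved_torque_nm: float
    achieved_angle_deg: float
    passed: bool
```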
Communication view
The Communications view captures concerns of system interac-
tion patterns and methods.
Until now, the individual systems have been identified, their capabilities and service
interfaces defined, and the format and meaning of information exchanged between systems
specified. But how does the information move between systems? In which direction is
service exchange initiated? The communications view captures how information is passed
between the systems.
While creating this view, the architect should be asking questions such as:
Integration view
The Integration view captures concerns of interfacing between
the virtual and real worlds.
Thus far, the individual systems are interconnected with each other through communica-
tion buses with known information formats and capabilities. Due to the nature of IIoT
applications, many of the systems will have sensors and actuators to perform real-world
interactions. The integration view captures what hardware interfaces are required by the
systems in order to interact with the physical world.
While creating this view, the architect should be asking questions such as:
Figure 3.13: The Communication view specifies the communications patterns and methods as a
shared bus between systems.
Figure 3.14: The Integration view highlights important real world or legacy equipment integra-
tion points.
Asset view
The Asset view captures concerns of physical world devices.
System requirements for integration and communication have been formed. The Asset
view is where architects identify the important devices on which the systems will run. In
addition, legacy devices are shown here with the legacy communications bus defined in
the integration view. The asset view should also show networking appliances
and network interfaces such as Ethernet, Wi-Fi or IEEE 802.15.4.
While creating this view, the architect should be asking questions such as:
Figure 3.15: The Asset view communicates hardware specifics; this view can be very detailed,
or leave more flexibility to designers.
System view
The System view captures concerns of a single system with all
the software layers present.
Thus far the architecture style has focused on network views of the IIoT application.
This is no accident, as IIoT applications are highly distributed. For each of the IIoT sys-
tems defined so far this view must be developed. The system view is an implementation
view that can be utilized for estimation and development. Therefore, it will be utilized by
software and hardware engineers in building the system. It must capture the system con-
cerns related to: asset, integration, communication, information, function and business.
This view captures the system architecture; it could utilize an object-oriented, layered,
pipe-and-filter, event-driven or agent architecture, or some suitable combination. For
example, a suggestion could be to use a hybrid of two styles: 1) a layered
style that captures the asset, integration and communication concerns and 2) an object style
that captures the information, function and business concerns. A Hardware Abstraction Layer
(HAL) creates a platform upon which higher-layer objects can be manipulated. The layered
style indicates clear dependencies and responsibilities. Object-oriented modelling pro-
vides flexible definitions and changing interactions between components. This is shown
in Figure 3.16.
Figure 3.16: The System view is utilized by the architect to communicate a model of the system
that can guide implementation.
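A minimal sketch of the suggested hybrid is shown below: a layered part (hardware abstraction and communication) beneath an object layer carrying the function and business concerns. Class and method names are illustrative assumptions, not a prescribed design.

```python
# Sketch of the suggested hybrid system architecture: a layered part (a
# hardware abstraction layer plus a communication layer) beneath an
# object-oriented part modelling the function and business concerns.
# Class and method names are illustrative assumptions.
class HardwareAbstractionLayer:
    def read_torque(self) -> float:
        # A real HAL would talk to the tightening spindle's driver here.
        return 0.0

class CommunicationLayer:
    def publish(self, topic: str, payload: dict) -> None:
        # A real layer would serialize and send the payload (e.g. MQTT or CoAP).
        print(topic, payload)

class TighteningCapability:
    """Object layer: function and business logic built on the layers below."""
    def __init__(self, hal: HardwareAbstractionLayer, comms: CommunicationLayer):
        self.hal = hal
        self.comms = comms

    def tighten(self, product_id: str, target_torque: float) -> None:
        achieved = self.hal.read_torque()
        self.comms.publish("tightening/result",
                           {"product": product_id,
                            "target": target_torque,
                            "achieved": achieved})
```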
Chapter 4
Contribution
The appended publications are in chronological order and will be presented in this
manner. This section presents a summary of each paper and the contributions made by
the author.
Contribution: The author was responsible for surveying frameworks and reporting
on findings.
Paper E: Service Oriented Architecture Enabling the 4th Generation District Heat-
ing
Authors: Jan van Deventer, Hasan Derhamy, Khalid Atta and Jerker Delsing
Published in: The 15th International Symposium on District Heating and Cooling,
2016, Seoul.
Summary: This paper proposes a new approach for the next generation of district
heating. The approach combines SOA-based data services with advanced optimization
services. This provides a new platform for achieving efficiency gains which results in cost
savings.
Contribution: The author has contributed toward applying the Arrowhead framework
and describing how it operates within the district heating use case.
return the full results to the smart product. It uses SOA and SoS so that it is able
to maintain a local archive or send the results to the cloud for permanent storage. It
is based on work with Volvo Trucks Corporation for the Far-Edge and Productive 4.0
projects.
Contribution: The author was responsible for the research, designing the workflow
manager and executor architecture, prototyping the demo systems, testing and writing
the paper.
The thesis presents findings in the domain of IIoT regarding communication protocol
interoperability, information extraction, decentralized MES and software capability decom-
position, and proposes an architectural style based on SOA and SoS principles.
The overall research approach has been through cycles of identification, concept
proposition, prototype implementations and final reflection and retrospective analysis.
This approach is repeated through small projects, each building on the result of the
previous. The first project tackled the challenge of communication interoperability
within IIoT. The solution involved dynamic translators being injected into the communi-
cation path. The dynamic aspects of protocol translation were identified to be useful
for information extraction in IIoT systems. Building on this, system composition was
proposed to create SoS and was further developed on a use case. This then led to propos-
ing an approach for workflow management within IIoT, which ultimately resulted in the
proposed architectural principles and style.
The next sections summarize the research contributions and outline a vision toward
the future.
5.1 Conclusion
The contribution of this thesis is towards enriching the body of knowledge within the
domain of software architecture in IIoT. The challenges and requirements of IIoT on
software are studied and analysed, resulting in a proposed set of architectural principles.
Several prototype implementations have been developed. All findings contribute to the
research community, where they can be further utilized.
IIoT applications are made up of highly interconnected and ubiquitous computing sys-
tems that are embedded into their physical environment. An IIoT application is likely
to evolve in a piecemeal manner as each constituent system goes through a distinct life
cycle. Also, technological advancements are rapid and so change is frequent. Software
without adherence to some form of clear architecture leads to brittle applications that
resist change. Therefore, a well defined architecture is required to guide software devel-
opment and integration. On the other hand, a highly prescriptive software architecture,
without clear rationale, results in a resistance towards adoption of new technologies.
The proposed architectural style utilizes systems, SoS and SOA principles. The style
promotes clear rationale behind design decisions at all levels. It provides a framework
for introducing change and handling system fragmentation and complexity.
When making changes, the designer should be guided by layers of architecture. Where
required, the designer can break compliance with the reference architecture, so long as the
architectural style is adhered to. The style can also be modified so long as the principles
are then preserved. However, the architectural principles should not be changed unless
there is a strong rationale behind the alteration. Changes at this layer of the architecture
will have a wide impact on all application systems.
The research contributions are summarized below.
Q1 What are the possibilities and limitations for efficient and interoperable multi-
protocol IoT networking?
The results of this thesis show that it is possible to have multi-protocol interoperability
without the use of fixed gateways or proxies. The use of an on-demand multi-protocol
translator allows for seamless communication between two systems incapable of direct
messaging. It is possible to maintain advanced error detection, trust bridging and QoS
monitoring. The options become severely limited when the Confidentiality, Integrity
and Availability (CIA) model of security is applied to translation enabled information
exchange. Some of these issues can be mitigated if the translator can be treated as
a trusted peer. This adds the benefit of being able to build trust between two peers
who do not trust each other, but both trust the translator. In addition, the use of
standards, such as IETF COSE for object level encryption, can hide the contents of
sensitive information from the translator. The translation enabled information exchange
can introduce communication delay. The internal architecture of the proposed translator
has been designed so as to reduce this delay. There are further possibilities to reduce
this delay by using a pipeline architecture throughout the software library layers. This
research was presented in papers B, C and F.
Q2 What are the possibilities for ancillary information extraction within SOA-based
heterogeneous networks?
This thesis shows that it is possible to dynamically extract information from its source
within an IIoT network. Based on the usage of SOA and SoS theory, it is possible to 1)
visualize the IIoT elements, 2) select data sources, data processing and data sinks and 3)
re-orchestrate communications paths. A graph model is proposed to capture the SOA and
SoS elements. Where a graph path is disconnected between collaborating systems a local
bridge can be injected, connecting the systems. The local bridge could be a translator (in
the communications path), or a data manipulator (i.e. filter or trigger in the functional
path). The results show the feasibility of dynamic manipulation or creation of an SoS
during run-time by domain engineers. Because of local information access, centralized
datastores (warehouses) are avoided. This research was presented in papers G and I.
Interoperation of SoS is dependent on the goals of individual systems and the overarching
SoS goals. This thesis approaches this challenge by defining three systems operating at
the edge to co-ordinate workflow management, execution and tracking. An intelligent
product is responsible for tracking progress through the manufacturing process. A work-
flow manager and a workflow executor exist in each workstation. The workflow manager
is responsible for interaction with the intelligent product and co-ordination with inven-
tory and execution systems. The workflow executor is responsible for interaction with
equipment and the step wise execution of the operations state machine. The capability
decomposition guided by architectural principles promotes system reuse across differ-
ent workstations and supports system development as workstation requirements change.
This research was presented in paper H.
Q4 What are the underlying architectural principles and styles that can guide design of
Industrial Internet of Things applications?
During this research, several IIoT applications have been developed applying and
refining individual principles that contributed to the proposed architectural style. The
developed interoperability solution supports multi-protocol IIoT networking with
run-time service (translator) injection. Through implementation of IIoT applications
such as the wheel loader, UWB localization, Festo conveyor belt and Nutrunner, ideas and
principles for information extraction were developed and tested. The proposed graph
model allows information extraction at its source within SOA-based heterogeneous net-
works. The developed SoS graph model of a wheel loader correctly responds to Open
Cypher graph queries, building the functional, communication and security paths. The
developed system automatically generates SoS compositions. This lessens the required IT
competency on the factory floor, eliminates human error and reduces development time.
Furthermore, the proposed approach can be used to validate SoS specifications. For
instance, functional paths will indicate whether all required systems and services are
present in the current network. Principles of application capability decomposition were
applied to the autonomous workstation use case. This work tackled the challenges imposed
by business-driven requirements on manufacturing; it is an effort towards a decentralized
MES. The proposed workflow manager and executor enable workstations to handle local
MES functionality and reduce integration efforts on the factory floor. A workstation
with these capabilities becomes self-contained, requiring near-zero local infrastructure
support (MES, ERP, PLM, Internet). This lessens reliance on shared networks, whose
degraded performance could otherwise impact operation of the factory floor. Within
the setting of Industry 4.0, self-contained workstations can be relocated to remote sites
with minimal integration effort.
The result of this research can be utilized by industry and academia to create soft-
ware architecture for IIoT applications. When compared with mechanical or electrical
engineering, software engineering is extraordinarily constraint free. However, the IIoT
introduces architectural constraints such as processing power, communications and bat-
tery life among others. Choices of where logic is implemented can sometimes appear
arbitrary. Therefore, software architecture is often a recommended practice; imposing
guidelines on design decisions regarding distribution and placement of logic. The con-
ducted research approaches this challenge by studying the rationale behind application
capability decomposition within IIoT. The thesis proposes an architectural style that
promotes design of systems that can handle the complexity imposed by IIoT.
The body of knowledge acquired during this research can be utilized in teaching
a new generation of engineers (from any domain). It is crucial for students as future
engineers and software architects to understand architectural thinking and principles.
Developers must be able to approach their implementation with understanding of the
architectural vision. This hopefully will result in a consistent, correct and stable software
implementation.
Intelligent systems will need to negotiate content alongside interfaces. Self-organisation
rather than managed organisation will be required for a functioning IIoT. It would be
interesting to explore the potential of machine learning and artificial intelligence, particularly
for self-organizing systems within IIoT.
Another area of future work can be directed towards timing issues in SOA-based IIoT
applications. Due to the messaging overhead, SOA applications generally experience
longer delays. Timing and delay are a relative measure and a matter of perspective; an
additional delay need not be seen as an issue. It would be interesting to see, how applying
the principle of first person perspective may affect the design of a system, such that it is
less sensitive to delays introduced by other systems.
An interesting research direction would be native platform support for SOA-based
IIoT applications. The proposed architectural principles may drive the design of such plat-
forms, supporting high-level, human-oriented abstractions.
In computing, trust is usually modelled as a binary characteristic. A challenging task
would be to investigate the potential of IIoT systems to operate within a continuum of
trust. How do the architectural principles assist in creating graded trust collaboration?
What would the technology look like to achieve this?
It is possible that government regulation could be required for architectural planning
of future software systems, with certified sign-off in much the same manner as for housing
architecture.
References
[3] “The ’only’ coke machine on the internet,” Carnegie Mellon University. [Online].
Available: https://www.cs.cmu.edu/~coke/history_long.txt
[4] B. Scholten, The road to integration : a guide to applying the ISA-95 standard in
manufacturing. Research Triangle Park NC : ISA, 2007.
[5] E. A. Lee, “Cyber physical systems: Design challenges,” in 2008 11th IEEE In-
ternational Symposium on Object and Component-Oriented Real-Time Distributed
Computing (ISORC), May 2008, pp. 363–369.
[7] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its role in the
internet of things,” in Proc SIGCOMM, 2012.
[14] F. P. Brooks, Jr., “The mythical man-month,” SIGPLAN Not., vol. 10, no. 6, pp.
193–, Apr. 1975. [Online]. Available: http://doi.acm.org/10.1145/390016.808439
[15] A. Cockburn and J. Highsmith, “Agile software development, the people factor,”
Computer, vol. 34, no. 11, pp. 131–133, Nov 2001.
[19] J. Delsing, Ed., Arrowhead Framework: IoT Automation, Devices, and Maintenance.
CRC Press, 12 2016. [Online]. Available: http://amazon.com/o/ASIN/1498756751/
[23] J. Delsing, J. Eliasson, J. van Deventer, H. Derhamy, and P. Varga, “Enabling iot
automation using local clouds,” in 2016 IEEE 3rd World Forum on Internet of
Things (WF-IoT), Dec 2016, pp. 502–507.
[25] “The Fourth Industrial Revolution - What it means and how to respond. World Eco-
nomic Forum,” Jan 2018. [Online]. Available: https://www.weforum.org/agenda/
2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
[26] W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and chal-
lenges,” IEEE Internet of Things Journal, vol. 3, no. 5, pp. 637–646, Oct 2016.
[27] W. M. van der Aalst and K. M. van Hee, Workflow Management: Models, Methods,
and Systems. The MIT Press, 2000.
[30] B. Tjahjono, C. Esplugues, E. Ares, and G. Pelaez, “What does industry 4.0 mean
to supply chain?” Procedia Manufacturing, vol. 13, pp. 1175 – 1182, 2017. [Online].
Available: http://www.sciencedirect.com/science/article/pii/S2351978917308302
[32] J. Steding. (2018) Supply chain management trends in industry 4.0. [Online].
Available: https://www.soneticscorp.com/4-supply-chain-management-trends/
[33] Frost and Sullivan. (2017) Digital supply chain - iiot impact on manufacturing
supply chain. [Online]. Available: https://ww2.frost.com/frost-perspectives/digital-
supply-chain-iiot-impact-manufacturing-supply-chain/
[34] i Scoop. (2018) Logistics 4.0 and smart supply chain management in
industry 4.0. [Online]. Available: https://www.i-scoop.eu/industry-4-0/supply-
chain-management-scm-logistics/
[35] W. M. P. van der Aalst, A. H. M. ter Hofstede, and M. Weske, “Business pro-
cess management: A survey,” in Proceedings of the 1st International Conference on
Business Process Management, volume 2678 of LNCS. Springer-Verlag, 2003, pp.
1–12.
[37] M. Weyrich and C. Ebert, “Reference architectures for the internet of things,” IEEE
Software, vol. 33, no. 1, pp. 112–116, Jan 2016.
[38] H. P. Breivold, “A survey and analysis of reference architectures for the internet-of-
things,” ICSEA 2017, p. 143, 2017.
[39] “Microsoft azure iot reference architecture,” Microsoft Corporation, Tech. Rep.,
2016.
[40] “The intel iot platform - architecture specification white paper internet of things,”
Intel Corporation, Tech. Rep., 2016.
[46] J. C. Jones, Design Methods, 2nd Edition. New York, NY, USA: John Wiley &
Sons, Inc., 1992.
[49] “That ’internet of things’ thing,” Tech. Rep. [Online]. Available: http:
//www.rfidjournal.com/articles/view?4986
[50] R. Minerva, A. Biru, and D. Rotondi, “Towards a definition of the internet of things
(iot),” Tech. Rep., May 2015.
[54] J. Delsing, “Local cloud internet of things automation: Technology and business
model features of distributed internet of things automation solutions,” IEEE Indus-
trial Electronics Magazine, vol. 11, no. 4, pp. 8–21, Dec 2017.
[55] E. D. Knapp and J. T. Langill, Industrial Network Security: Securing critical in-
frastructure networks for smart grid, SCADA, and other Industrial Control Systems.
Syngress, 2014.
[56] D. Meltzer, “Securing the industrial internet of things,” Tech. Rep., June 2015.
[57] H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, and M. Hoffmann, “Industry 4.0,”
Business & Information Systems Engineering, vol. 6, no. 4, pp. 239–242, Aug 2014.
[Online]. Available: https://doi.org/10.1007/s12599-014-0334-4
[59] P. B. Kruchten, “The 4+1 view model of architecture,” IEEE Software, vol. 12,
no. 6, pp. 42–50, Nov 1995.
[62] H. Zimmermann, “Osi reference model - the iso model of architecture for open
systems interconnection,” IEEE Transactions on Communications, vol. 28, no. 4,
pp. 425–432, April 1980.
[63] T. Erl, SOA Principles of Service Design (The Prentice Hall Service-Oriented Com-
puting Series from Thomas Erl). Upper Saddle River, NJ, USA: Prentice Hall PTR,
2007.
[64] S. Newman, Building Microservices: Designing Fine-Grained Systems, 1st ed.
O’Reilly Media, February 2015.
[65] “The single responsibility principle - 8th light.” [Online]. Available: https:
//8thlight.com/blog/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html
[66] T. O’Reilly and J. Battelle, “Web squared: Web 2.0 five years on,” Tech. Rep., 2009.
[67] T. O’Reilly, What is Web 2.0? Design Patterns and Business Models for the Next
Generation of Software. O’Reilly Media, September 2009.
[68] D. Merrill, “Mashups: The new breed of web app,” Tech. Rep., 2006.
[69] “Real-time twitter trending hashtags and topics.” [Online]. Available: https:
//www.trendsmap.com/
[70] “Node-red.” [Online]. Available: https://nodered.org/
[71] “Ifttt helps your apps and devices work together.” [Online]. Available:
https://ifttt.com/
[72] S. Bussmann, N. R. Jennings, and M. Wooldridge, Multiagent systems for manufac-
turing control: A design methodology. Germany: Springer-Verlag, 2004.
[73] M. Wooldridge, Introduction to Multiagent Systems. New York, NY, USA: John
Wiley & Sons, Inc., 2001.
[74] S. D. J. McArthur, E. M. Davidson, V. M. Catterson, A. L. Dimeas, N. D. Hatziar-
gyriou, F. Ponci, and T. Funabashi, “Multi-agent systems for power engineering
applications - part i: Concepts, approaches, and technical challenges,” IEEE Trans-
actions on Power Systems, vol. 22, pp. 1743–1752, 2007.
[75] Y. Shoham and K. Leyton-Brown, Multiagent Systems: Algorithmic, Game-Theoretic,
and Logical Foundations. Cambridge University Press, 2009.
[76] “Department of defense dictionary of military and associated terms,” Washington,
DC, USA, Tech. Rep., 2015.
[77] ISO/IEC/(IEEE), “ISO/IEC 42010 (IEEE Std) 1471-2000 : Systems and Software
engineering - Recomended practice for architectural description of software-intensive
systems,” 2007.
[78] “Systems engineering guide for systems of systems,” Washington, DC, USA, Tech.
Rep., 2008.
[83] P. Reed, “Reference architecture: The best of best practices,” Tech. Rep., 2002.
[84] M. Bell, SOA Modeling Patterns for Service Oriented Discovery and Analysis. Wiley
Publishing, 2010.
[85] T. Erl, SOA Design Patterns, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall
PTR, 2009.
Paper A
A Survey of Commercial
Frameworks for the Internet of
Things
Authors:
Hasan Derhamy, Jens Eliasson, Jerker Delsing, Peter Priller
A Survey of Commercial Frameworks for the
Internet of Things
Abstract
In 2011, Ericsson and Cisco estimated 50 billion Internet-connected devices by 2020;
encouraged by this, industry is developing application frameworks to scale the Internet of
Things. This paper presents a survey of commercial frameworks and platforms designed
for developing and running Internet of Things applications. The survey covers frameworks
supported by big players in the software and electronics industries. The frameworks are
evaluated against criteria such as architectural approach, industry support, standards
based protocols and interoperability, security, hardware requirements, governance and
support for rapid application development. There is a multitude of frameworks available,
and here a total of 17 frameworks and platforms are considered. The intention of this
paper is to present recent developments in commercial IoT frameworks and furthermore,
identify trends in the current design of frameworks for the Internet of Things; enabling
massively connected cyber physical systems.
1 Introduction
For more than a decade the Internet of Things (IoT) has boosted the development of
standards based messaging protocols. Recently, encouraged by the likes of Ericsson and
Cisco with estimates of 50 billion Internet connected devices by 2020 [1], attention has
shifted from interoperability and message layer protocols towards application frameworks
supporting interoperability amongst IoT product suppliers.
The IoT is the interconnection of ubiquitous computing devices for the realization of
value to end users [2]. This definition encompasses ”data collection” for the betterment
of understanding and ”automation” of tasks for optimization of time. The IoT field has
evolved within application silos with domain specific technologies, such as health care, so-
cial networks, manufacturing and home automation. To achieve a truly ”interconnected
network of things” the challenge is enabling the combination of heterogeneous technolo-
gies, protocols and application requirements to produce an automated and knowledge
based environment for the end user.
In [3], Singh et al. elaborate on three main visions for the IoT: Internet Vision,
Things Vision and Semantic Vision. Depending on which vision is chosen the approach
taken by a framework will differ and provide a better result for those applications. As
surveyed by Perera et al. in [4], there are many existing IoT products and applications
available. These however are based on proprietary frameworks which are not available
for development of customized applications. The frameworks presented in this survey are
all targeted as a basis for development of IoT applications.
This paper presents a survey of highly regarded commercial frameworks and platforms
which are being used for Internet of Things applications. Many of the frameworks rely
on high level software layers to assist in abstracting between protocols. The high level
software layer provides flexibility when interconnecting between different technologies
and is well suited for working in cloud environments. In some cases the frameworks look
into standardizing interfaces, defining a software service bus or simply opting to choose
a single network protocol and set of application protocols. This is further discussed as
follows; in Section 2 introduces the concept of frameworks and defines three categories
of frameworks used in this survey. Sections 3 and 4 then introduces the frameworks and
platforms studied, grouped by application area. In Section 5 a discussion of a comparative
analysis of the frameworks and platforms is presented. The survey finishes with a few
concluding remarks in Section 6.
Many frameworks take a data centric or data driven approach. Utilizing a global
cloud, they focus on enabling collation, visualization and analytics on data. This archi-
tecture is well suited for applications such as asset tracking, logistics and predictive main-
tenance [5]. In some cases the framework will allow creation of local hosted instances [6]
but do not detail a method of interconnecting multiple cloud instances within the frame-
work. This approach is suitable for providing data as a service but will generally leave the
implementation specifics of the end-points to application developers. The framework sim-
plifies the operation of end-points to only feeding data back to a central repository which
will then implement complex security authorizations and usage tracking. The increasing
computing power at the edge of networks is not leveraged in such frameworks, which can
introduce inefficiencies in bandwidth and latency. Figure 1 illustrates the concept of a
global cloud through which IoT applications connect and communicate.
A smart objects approach makes the endpoints active participants within the frame-
work. The end points are included as key aspects of the framework which means the
focus of the framework is on interconnecting the end-points. This approach is well suited
for distributed automation tasks which require a high level of device independence, such
as home and building automation and manufacturing.
While the devices can be very small they are often directly addressable from the
Internet, see [7]. Because of this focus on automated end points and functional behavior,
many of these frameworks do not go into the specifics of cloud integration and so do
not provide good support for data collection. The data is produced by end points and
consumed by end-points, which within this context will usually have some predefined
understanding of the end-point pairing. The implementation of the data in the cloud is
left to each IoT designer with minimal support from the framework.
In both of the approaches mentioned, many features such as end-to-end security and
layered interoperability suffer due to ad-hoc development either at the end-points or
within the cloud. A third approach, becoming more prevalent, takes into account
the need to satisfy real-time automation requirements while not hindering the value of
semantic big data and data analytics. Figure 2 illustrates the local cloud concept with
independent local operation and shared global functionality.
3 Frameworks
3.1 IoT Frameworks for home automation
Home automation has been a key area for development of IoT-based applications. With
the reduction in costs of manufacturing of IoT enabled devices, there are three major
frameworks trying to gain support from device manufacturers and application developers.
IoTivity - backed by Intel and Samsung, AllJoyn - backed by Qualcomm, LG and Sony,
and Thread - backed by ARM and Google.
IPSO Alliance
The IPSO Alliance's specification builds on top of IETF standards such as 6LoWPAN, CoAP and
SenML [9].
IoTivity
The IoTivity framework is developed by the Open Interconnect Consortium initially
targeting IoT in smart homes and looking to further expand to other IoT silos. It is
based on CoAP and its key building blocks are the Connectivity Abstraction (CA) layer
and a Resource Introspection (RI) layer. The framework is being extended to HTTP and
other communication protocols are supported through protocol plug-ins [10].
The IoTivity stack is shown in Figure 3. The thin block stack supports resource
constrained devices. IoTivity makes use of D/TLS for security. Another interesting
feature of IoTivity is the Soft Sensor concept [10], which supports processing raw sensor
data at intermediate or edge nodes.
As described in the Things Manager [10], IoTivity adopts a similar approach to the
IPSO Alliance by using an object and resource based model. LWM2M and IoTivity find
convergence here with the IPSO Alliance model and a possible opportunity for interoper-
ability.
AllJoyn
The AllJoyn framework, developed by the AllSeen Alliance, is designed for enabling
interoperability for home automation [11] and industrial lighting [11] applications. The
AllJoyn core operates as a software bus [12] between devices. Devices must implement
a bus attachment responsible for message marshaling and serialization. Constrained
devices use a thin library [12], and do not have a bus attachment, so must connect to an
AllJoyn router. Thin devices work at the messaging level, while full devices communicate
by remote procedure calls. Running such frameworks introduces overheads which limit
real-time performance and participation of resource constrained and low power devices.
The AllJoyn core library provides authentication and encryption for end-to-end secu-
rity [12]. Authentication is provided by means of the Simple Authentication and Security
Layer (SASL) security framework defined by the D-Bus specification [13]. It supports
both point-to-point “session” keys and point-to-multi-point “group” keys. Thin Client
devices [12] do not support this security. AllJoyn is transport agnostic [12] and currently
runs on WiFi, Ethernet, serial and Power Line.
Thread
The Thread group's framework defines a protocol stack based on Nest's early implemen-
tation of the smart thermostat; it uses the IETF IP stack and UDP, and builds up additional
security and commissioning functionality [14]. The Thread protocol can address devices
directly and is able to perform peer-to-peer communications in a mesh network. It is
seen as an evolution from the traditional ZigBee stack to an IP based stack [14].
Being built atop the standard IEEE 802.15.4 radio allows them to make use of already
mass-produced ZigBee chips [14]. There is limited information available, as Thread is only
available to member companies and the Thread group will be providing a certification
process.
The framework defines a resource and object model whereby the LWM2M client holds
objects which each contain some resources. These objects can be instantiated by either
a remote LWM2M server or by the client itself. Once instantiated the objects can be
operated upon via the interfaces mentioned above.
As shown in the diagram of the LWM2M stack, the security layer is handled with
DTLS [15]. It is possible, but not recommended, to send the data without security.
Access control is specified at the object layer with the use of an Access Control Object
Instance [15]. These define what operations are allowed on a per-Object-Instance basis. So
within a given LWM2M client, a single object instance will hold resources with different
access control rights depending on the LWM2M server performing the access.
The framework unifies the CoAP server and client model into the LWM2M server and
LWM2M client, which allows bi-directional RESTful communication [15]. For example,
an LWM2M server is able to POST to an LWM2M client and vice versa. This means the
application development architecture has a more flexible concept of client/server.
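As an illustration of the object and resource model, the sketch below represents an LWM2M client as nested dictionaries, using the IPSO Temperature object (ID 3303) and its Sensor Value resource (ID 5700); the access control representation is deliberately simplified and does not follow the normative Access Control Object encoding.

```python
# Sketch of the LWM2M object/resource model as nested dictionaries, using the
# IPSO Temperature object (ID 3303) and its Sensor Value resource (ID 5700).
# The access-control representation is simplified and illustrative, not the
# normative Access Control Object encoding.
lwm2m_client = {
    "endpoint": "urn:dev:ops:nutrunner-01",
    "objects": {
        3303: {                      # IPSO Temperature object
            0: {                     # object instance 0
                "resources": {5700: 23.5, 5701: "Cel"},
                # per-instance operations permitted for each LWM2M server
                "acl": {"server-1": {"read"}, "server-2": {"read", "write"}},
            }
        }
    },
}

def allowed(client, server, object_id, instance_id, operation):
    """Check whether a given server may perform an operation on an object instance."""
    acl = client["objects"][object_id][instance_id]["acl"]
    return operation in acl.get(server, set())

print(allowed(lwm2m_client, "server-1", 3303, 0, "write"))  # False
```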
4 Platforms
4.1 Cloud-based IoT Platforms
Centralized platforms offer a simple method of integrating sensors into IoT applications.
By employing a global cloud approach, Cumulocity, ThingWorx and Xively provide an
integration platform for organizations to build IoT applications on. They recognize that
many commercial organizations will be interested in gaining value from the data provided
by embedded sensors.
Cumulocity
In the Cumulocity platform, sensor nodes are clients which connect to the cloud through a
RESTful HTTPS API. Sensor nodes are modeled as objects with properties and methods
for access and manipulation. Commands are pulled by devices from the Cumulocity
server [18]. Depending on the pull frequency, there will be a delay from when a command
is issued until the device receives it. Constrained networks not operating on HTTPS
use an ”agent” [18] to connect to the Cumulocity server. A server side ”event” language
syntactically resembling SQL scripting [18] loads triggers to be performed in reaction to
events. Cumulocity only supports RESTful HTTP/S.
ThingWorx
The ThingWorx platform targets application integration through model driven develop-
ment. It composes services, applications and sensors as data sources and interconnects
these through a virtual bus. The framework is transport agnostic and has been ported
to run with CoAP, MQTT, REST/HTTP and Web Sockets [6]. Treating other clouds as
data sources ThingWorx integrates with other cloud providers such as Xively and web
services such as Twitter and weather services. Communication between devices, services
and applications must be routed through the ThingWorx bus, thereby not enabling peer
to peer communication. Using a Mashup builder, organizations are able to quickly con-
nect data sources to dashboards, for tracking and monitoring assets and gathering data
from many data sources to perform data analytics in real-time [19].
Xively
Third in this group is the Xively platform, formerly known as Pachube. This platform,
similar to the previous two, provides a central message bus which routes messages between
devices of different protocols. The components which make up the Xively architecture
can be seen in Figure 6.
The message bus is combined with the Xively API for MQTT, HTTP and Web Sockets
to provide an interoperability layer. It is a data-driven platform with the ability to give
fine-grained access to data streams and data feeds. Based on the client-server model, it has
a centralized method of device configuration where each device has a virtual presence;
when a device comes online it uses its serial number and some form of mutual
authentication to receive its configuration parameters set up on the Xively server. The
framework has additional services which allow for Business Services, Systems Integration
and Business Opportunities for companies and assist with governance of the network.
IzoT
The IzoT platform is made up of a communication stack intended for peer-to-peer com-
munications [20] consisting of several proprietary high level protocol services which run
on top of UDP [20]. Supporting priority messaging and end-to-end acknowledgements
on unicast and multicast messages, the communication stack can support multiple si-
multaneous messaging on unconstrained devices. It has built in discovery and interface
publishing, and can run on many networks including 6LoWPAN, free topology twisted
pair, WiFi and potentially any medium which can support UDP sockets [20].
IzoT supports symmetric and asymmetric key encryption and authentication. Using
a proprietary communication stack limits the ability for IzoT to be adopted widely for
general IoT applications.
ThingSquare
The ThingSquare platform is founded on the development of the Contiki OS and
is strictly based on IETF communication stacks. The offering includes cloud-based
device governance and bootstrapping, but is limited in terms of cloud-based application
integration and data analytics. The focus of the framework is enabling automation,
control and monitoring of smart objects through the Internet. Contiki OS boasts the
smallest IP stack, according to ThingSquare [21]. Their operating system has been ported
to run on many of the current IoT microcontrollers [21].
The ThingSquare framework uses cloud-based services for device management, authentication and authorization [21] of new devices joining the network. The framework only allows authorized users and devices to register and control other ThingSquare devices. A simplified network is shown in Figure 7, along with the supported network stack.
Intel
Intel has partnered with Wind River and McAfee to produce an IoT framework which includes hardware for things, intelligent gateways, cloud and Platform as a Service [22][23]. Intel's hardware technologies, Wind River's Operating Systems (OS) and McAfee's security products can be utilized in different layers of the IoT, from embedded to the cloud. VxWorks [24] can scale down to a 20 KB footprint for use in constrained embedded systems, while Wind River's gateway OS, based on Linux, can support many application environments including Lua, Java, and OSGi [23]. Whitelisting of binaries means that binaries without the correct signature cannot be executed on a device [23]. Role-based access control is provided, with a learning mode to generate security policy rules [25].
Microsoft
Microsoft supports the IoT at three layers. The Microsoft Azure Cloud provides an excellent platform for developing and integrating distributed applications using its proprietary Enterprise Service Bus. Device connectivity and governance are supported by Microsoft Azure Intelligent Systems Service (ISS) [26]. Microsoft StreamInsight is a platform for in-memory data analytics and processing [27]. It allows IoT applications to process data without the latency involved with traditional databases. It can be run as a local Web service or in the cloud as a hosted service in Azure. The device layer is supported by Microsoft Windows Embedded [28] and the .NET Micro Framework (.NET MF) [29].
Microsoft Research has developed the HomeOS platform [30] as a multi-protocol home automation server. It is a virtual OS running on a COTS computer and providing inter-connection between multi-vendor COTS devices [31]. Although not yet commercially available, it occupies the same space as AllJoyn, IoTivity and Thread [31].
IBM
There are many offerings from IBM, and by combining them it is possible to run an end-to-end industrial or consumer IoT system with MQTT-based communications and enterprise middleware. The key offerings are the BlueMix application server, WebSphere enterprise integration middleware, the MobileFirst application development platform, the Informix database, and the MessageSight MQTT broker. These different IBM products are shown in Figure 8.
IBM BlueMix is based on the Cloud Foundry [32] application server, offering additional management capabilities. The IBM IoT Foundation is hosted on Bluemix and provides
governance of IoT devices. IBM WebSphere MQ offers proven and robust enterprise application integration and supports MQTT networks by integrating WebSphere with the IBM MessageSight Appliance [33]. Previously known as IBM WorkLight Foundation [34], MobileFirst is a platform enabling machine-to-machine connections through mobile devices. It is a platform for web application and native mobile application development and provides hosting for those applications.
IBM MessageSight is a communications appliance which can handle high volumes of
MQTT communications. Built with performance in mind, the IBM MessageSight appli-
ance can process over 350K MQTT v3.1 messages per second and can publish over 15M
messages per second to consumer applications [35]. The IBM MessageSight appliance can be run in a virtual environment [36].
5 Discussion
The frameworks and platforms have been introduced and a high-level overview of their key features has been given. We now discuss the frameworks within the context of the evaluation criteria: the framework approach, industry support, underlying protocols, security, applicability to constrained devices and support for rapid application development.
Architectural approaches were introduced in Section 2.1 and the frameworks can be categorized as in Table 1. This categorization is indicative of the target application space for the frameworks.
Table 1: The three approaches discussed against the frameworks and platforms studied

Approach     | Frameworks, Platforms and Protocols
Global Cloud | Cumulocity, Xively, ThingWorx, IBM, Microsoft, Intel, LWM2M
Peer to Peer | IPSO, Thread, ThingSquare, IzoT, SEP 2.0, AllJoyn, IoTivity
Local Cloud  | Arrowhead
Cumulocity, Xively, ThingWorx, IBM, Microsoft and Intel serve the global cloud approach to running IoT applications. They provide the hosting platforms and application APIs for interacting with devices from applications running in the cloud. This is a well-known model used for business systems which prefer a centralized application. LWM2M joins this group as a device governance framework with a centralized approach. IPSO, Thread, ThingSquare, IzoT, SEP 2.0, AllJoyn and IoTivity have approached IoT application development from the device level and support a high level of peer-to-peer operation. This approach serves their customers well in home automation and device management. Within the local cloud category, Arrowhead sits alone. The unique approach of this framework is its support for integration of applications between secure localized clouds. The quality of service, security and scalability requirements of industrial automation have necessitated this approach.
The qualities of peer-to-peer communication, mesh network support, 6LoWPAN and low power operation make ThingSquare attractive for running the network edge in conjunction with cloud platforms such as Xively, Cumulocity or ThingWorx.
Table 2 shows a few of the larger IoT framework backers against the frameworks in
this paper. Platforms are not shown here as they are usually supported by the platform
provider only.
LWM2M has support from many larger organizations as it falls under the OMA umbrella of specifications. Support from large organizations mitigates the risk of a sudden end of support for a chosen framework. It can be seen that Microsoft and Intel, while providing IoT platforms of their own, are also members of IoT framework development efforts.
The number of member organizations within each framework also indicates the confidence
industry has in each framework.
The centralized frameworks mentioned earlier offer message protocol flexibility and will usually support MQTT, REST and sometimes CoAP and XMPP. Table 3 shows the frameworks against common protocols. Under the ‘other’ column are either proprietary protocols or less common protocols such as DDS.
Identifying open source frameworks and the protocols they rely on is important to developers looking for flexibility in choosing libraries, vendor platforms and interoperability. Many of the frameworks are proprietary, but most support at least one open source messaging protocol. IPSO Alliance, OMA-LWM2M, AllJoyn, IoTivity and Arrowhead are open-source and based on open-source technologies. IBM and Microsoft are proprietary and the Thread specification is only available to member companies. The platforms such as ThingWorx, ThingSquare, Xively and Cumulocity are proprietary, and so moving an application from one to another would be a costly exercise.
In Table 4 the frameworks and protocols in this survey are categorized by layer. Most of the data centric frameworks sit in the application layer, while many of the control centric frameworks are in the messaging layer. Here, the messaging layer refers to a layer above transport which still does not offer rich application layer support.
Some of the frameworks, such as IoTivity and AllJoyn, support a dual stack implementation, providing a reduced functionality stack for constrained devices. However, this is not always the case: Xively, Cumulocity and ThingWorx do not support constrained devices directly and rely on intermediary agents or gateways to integrate resource constrained devices. Security aspects also drive hardware requirements. Crypto hardware support is required by IzoT, AllJoyn and IoTivity. Echelon's IzoT platform offers hardware components as part of its framework and also provides adaptation layers for non-IzoT devices. IBM supports its MQTT based framework with a dedicated server appliance which runs the MQTT broker.
Rapid application development, (re-)configurability, scalability and deployment considerations are important characteristics. It is difficult to make an evaluation of such aspects, but it is worth mentioning frameworks with comparative strengths. IBM and Microsoft's strong background in enterprise service buses means they have a good advantage for scaling up as business needs grow. ThingWorx, Cumulocity and Xively demonstrate strength in rapid application development and a focus on value-added work. Thread, IoTivity and AllJoyn tend to focus on customers using commercial off-the-shelf devices and therefore simplify deployment. Arrowhead's strength is in its re-configurability, through the use of dynamic orchestration of services and systems.
Furthermore, governance and management of the devices, services and interfaces will assist with rapid application development and maintenance. Cloud platforms such as Cumulocity, ThingWorx and Xively offer strong application governance and some device management of active devices. In contrast, AllJoyn, the IzoT platform and ThingSquare offer good device management but less support for application governance. IBM and Microsoft both have mature cloud application governance and management. Microsoft has good device management through its embedded OS family and its embedded .NET runtime. Arrowhead provides application configuration and authorization governance through its core services. Applications can discover services, download configurations and authorize access. The primary purpose of LWM2M is the governance and management of devices, at a scale suited to cellular operators. This suggests it will have good performance for large scale device networks.
6 Conclusion
As the market for IoT applications has grown, industry has worked with academia to create a standardized set of communication protocols. Now, frameworks and platforms for the IoT are being developed by industrial consortia. The aim is to lay down a foundation at the application layer which will enable deployment of IoT applications at large scale, either in instance size or in instance number.
This survey has presented a number of commercially available frameworks and platforms for developing industrial and consumer IoT applications. The studied frameworks have each approached IoT from the perspectives and priorities of their customers' needs. The priority was either on centralizing distributed data sources for cloud-based applications, referred to as the global cloud approach; supporting integration of devices for home (building) automation, referred to as the peer-to-peer approach; or integrating devices and clouds together for factory and industrial automation systems, referred to as the local cloud approach.
A comparative analysis of the frameworks was conducted based on industry support,
use of standards based protocols, interoperability, security, hardware requirements, gov-
ernance and support for rapid application development. Based on this analysis academia
and industry can identify frameworks most suitable for their future projects and identify
gaps in the current frameworks.
Finally, for platforms and frameworks to succeed they must:
1. Enable devices, applications and systems to securely expose APIs to 3rd party systems, and facilitate API management.
2. Enable systems to have protocol interoperability with other 3rd party APIs, and ensure they are extendable for new protocols.
The value of the whole is greater than the sum of its parts.
Acknowledgements
The authors would like to thank AVL List GmbH for supporting this survey of commercial frameworks for the Internet of Things.
References
[1] D. Evans, “The internet of things how the next evolution of the internet is changing
everything,” White Paper, Cisco, April 2011.
[2] L. Atzori, A. Iera, and G. Morabito, “The Internet of Things: A survey,” Computer Networks, vol. 54, no. 15, pp. 2787–2805, October 2010.
[15] Machine to machine (m2m) solution. Open Mobile Alliance. [Online]. Available:
http://openmobilealliance.org/about-oma/work-program/m2m-enablers/
[16] Smart energy profile 2 application protocol standard. ZigBee Alliance. [Online].
Available: http://splintered.net/z/Zigbee-smart-energy-profile-2.pdf
[19] Joy mining connected products success story. ThingWorx. [Online]. Available: http://www.thingworx.com/learning content/connected-products-success-story-joy-mining-2/
[20] B. Dolin. Requirements for the industrial internet of things. Echelon. [Online].
Available: http://info.echelon.com/IIoT-Requirements-Whitepaper.html
[22] Transform business with intelligent gateway solutions for iot. Intel. [Online]. Available: http://www.intel.com/content/www/us/en/internet-of-things/gateway-solutions.html
[23] Intel gateway solutions for the internet of things. Intel. [Online]. Available:
http://www.mcafee.com/us/resources/solution-briefs/sb-intel-gateway-iot.pdf
[28] Intelligent systems: A new level of business intelligence. Microsoft. [Online]. Avail-
able: http://www.microsoft.com/windowsembedded/en-us/intelligent-systems.aspx
[31] P. Oliphant. (2014, May) Alljoyn vs homeos for the connected home. [Online]. Available: http://www.connectedhomeworld.com/content/alljoyn-vs-homeos-connected-home
[32] “What is ibm bluemix,” IBM Corp., accessed: 2018-04-19. [Online]. Available:
http://www.ibm.com/developerworks/cloud/library/cl-bluemixfoundry
[34] A. Trice. (2014, November) So, what is ibm mobilefirst? [Online]. Available:
http://www.tricedesigns.com/2014/11/19/so-what-is-ibm-mobilefirst/
[36] Ibm messagesight - ibm messaging. IBM Corp. [Online]. Available: https://developer.ibm.com/messaging/messagesight/
Paper B
Translation Error Handling for
Multi-Protocol SOA Systems
Authors:
Hasan Derhamy, Pal Varga, Jens Eliasson, Jerker Delsing, Pablo Punal Pereira
Translation Error Handling for Multi-Protocol SOA
Systems
Hasan Derhamy, Pal Varga, Jens Eliasson, Jerker Delsing, Pablo Punal Pereira
Abstract
The IoT research area has evolved to incorporate a plethora of messaging protocol standards, both existing and new, emerging as preferred means of communication. The variety of protocols and technologies enables IoT to be used in many application scenarios. However, the use of incompatible communication protocols also creates vertical silos and reduces interoperability between vendors and technology platform providers. In many applications, it is important that maximum interoperability is enabled. This can be for reasons such as efficiency, security, end-to-end communication requirements etc. In terms of error handling, each protocol has its own methods, but there is a gap in bridging errors across protocols. Centralized software buses and integrated protocol agents are used for integrating different communication protocols.
However, the aforementioned approaches do not fit well in all Industrial IoT ap-
plication scenarios. This paper therefore investigates error handling challenges for a
multi-protocol SOA-based translator. A proof of concept implementation is presented
based on MQTT and CoAP. Experimental results show that multi-protocol error han-
dling is possible and furthermore a number of areas that need more investigation have
been identified.
1 Introduction
The Internet of Things (IoT) has assisted in breaking down application domain silos and
promoting horizontal integration between application domains. The Arrowhead frame-
work, presented by Blomstedt et al. in [1], is looking to improve the interoperability and
integrability of services provided by networked embedded devices. Cisco has estimated
that there will be 50 billion devices connected to the Internet by 2020 [2]. This is a
staggering number of devices and managing the differing communication standards is
not trivial.
The IoT area has seen many existing and new communications protocols emerging as
preferred standards. The adoption of the varied communication protocols can be linked to
specific application vertical requirements and is likely to stay this way as the IoT further
develops. Thus, in order for the Arrowhead framework to provide interoperability between
application verticals, methods and technologies for communication protocol translation
are required.
Some of the IoT protocols used in Service Oriented Architecture (SOA) based applications and systems are: Representational State Transfer (REST) over HTTP, eXtensible Messaging and Presence Protocol (XMPP), Message Queue Telemetry Transport (MQTT), Constrained Application Protocol (CoAP) and OLE for Process Control - Unified Architecture (OPC-UA). Each of these protocols offers benefits for particular application requirements, such as low-power operation, verbose headers and semantics, connection oriented messaging, decoupling of producer from consumer, discovery, bootstrapping, real-time behavior or reactiveness, and statelessness.
For the automation domain, OPC-UA is the predominant SOA protocol when communicating from the Distributed Control System (DCS) or Supervisory Control and Data Acquisition (SCADA) level and upwards in the ISA-95 architecture. With the expectation that IoT devices will be used in such ISA-95 architectures, it is clear that IoT SOA protocols like CoAP, XMPP, MQTT, and REST will show up together with OPC-UA and legacy technology in large automation systems. Large EU projects like Socrates and IMC-AESOP have published several papers on such architectures and on migration to such architectures [3][4][5][6][7][8].
Today, there is a variety of commercial IoT platforms which support interaction between different communication protocols. They offer an API for either translation agents [9], [10], running embedded on the device or in gateways, or a cloud-based software bus [11], [12] for each protocol. This indicates that there is a need to integrate different communication protocols.
However, these platforms either confine applications to adapters integrated into their solutions, or require that all communication be routed through a central server. Both approaches reduce flexibility for application designers and integrators, and introduce security vulnerabilities with untrusted third-party clouds. This creates inefficiencies in the communication path and bandwidth usage for localized applications. Enabling protocol interoperability by the use of SOA will increase design flexibility, enable local applications and remove the dependency on third-party translators. But Quality of Service (QoS), end-to-end connectivity, robustness and error handling become challenges which need to be addressed. A literature search did not reveal much research on error handling in multi-protocol translation. This indicates the need for more research in this area.
This paper investigates the question of error handling in multi-protocol translation for SOA systems. In a single protocol system, errors are propagated according to the protocol specification. In multi-protocol systems, error handling becomes more complex. In designing a SOA-based translator, error handling considerations become critical to robust communication. An error in one protocol must be translated to be understood by the other protocols. While a SOA-based translator must also address other aspects such as QoS, control messaging, security and semantic translation, these are not considered in this paper and are left as future work.
This paper is structured as follows: Section II provides background and related work, followed by the problem definition in Section III and the proposed solution in Section IV. An example application scenario and implementation details and results are presented in Sections V and VI. Finally, conclusions are summarized in Section VII, with suggestions for future work.
2 Background and Related Work
CoAP
The CoAP protocol [19] has been developed by the IETF for use in extending Internet capability down to resource constrained devices. It applies the request-response communication pattern to a client-server network model. CoAP targets sleepy and lossy networks in which supporting TCP becomes inefficient and power consuming [20]. It is based on UDP and provides an optional retry mechanism at the CoAP layer. It has a RESTful API with the GET, PUT, POST and DELETE verbs supported, with the addition of the OBSERVE function. OBSERVE creates a publisher-subscriber session between a CoAP server and client, sending notifications either when the resource state changes or periodically, on expiry of ‘Max-Age’ [19]. This flexibility makes CoAP an ideal choice for machine-to-machine interaction.
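A minimal sketch of this interaction, using the Eclipse Californium library (the same project behind the Californium proxy evaluated later in this thesis), is given below. The endpoint URI mirrors the wheel loader sensor used in the experiments; the exact resource path and address are placeholders, not taken from the published implementation.

import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapObserveRelation;
import org.eclipse.californium.core.CoapResponse;

public class CoapObserveSketch {
    public static void main(String[] args) {
        // Hypothetical observable resource on a constrained sensor node.
        CoapClient client = new CoapClient("coap://[sensor-ip]:5683/wheel_loader");

        // OBSERVE: the server pushes a notification whenever the resource state
        // changes, or at the latest when its Max-Age expires.
        CoapObserveRelation relation = client.observe(new CoapHandler() {
            @Override
            public void onLoad(CoapResponse response) {
                System.out.println("Notification: " + response.getResponseText());
            }

            @Override
            public void onError() {
                // Notification timeout or reset; a robust client would
                // re-register the observation here.
                System.err.println("Observation error");
            }
        });

        // Later, the observation can be cancelled gracefully:
        // relation.proactiveCancel();
    }
}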
MQTT
The MQTT protocol has been developed for enabling efficient communication between data sources and data sinks. It applies the publisher-subscriber pattern to a client-server network model. It has recently been standardized by OASIS but has a long history with IBM, being used in sensor networks. Some of its features [21] are: decoupling of the data producer from the data consumer through the centralized broker; a reduced header size and event based publishing, which enable highly efficient communication; QoS levels for message delivery; and a simple centralized security model, with connection initiation by clients enabling useful firewall and NAT traversal behavior.
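For comparison, a minimal MQTT subscriber using the Eclipse Paho client (the library mentioned later in the proof of concept) might look as follows; the broker address and topic are illustrative assumptions only.

import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class MqttSubscribeSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical broker and topic; the broker decouples producer from consumer.
        MqttClient client = new MqttClient("tcp://broker.example.org:1883", "subscriber-1");

        client.setCallback(new MqttCallback() {
            @Override
            public void messageArrived(String topic, MqttMessage message) {
                System.out.println(topic + ": " + new String(message.getPayload()));
            }

            @Override
            public void connectionLost(Throwable cause) {
                // The broker or network dropped the connection; other clients'
                // last-will messages may be delivered to subscribers at this point.
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) { }
        });

        client.connect();                             // connections are always initiated by the client
        client.subscribe("sensors/wheel_loader", 1);  // QoS level 1: at-least-once delivery
    }
}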
3 Problem Definition
Depending on how much the protocols overlap in the OSI layers, there can be considerable complexity in the translation process. Errors which can be detected and monitored need to be handled in a manner which enables adequate debugging and issue resolution, either automated or by manual intervention.
In this section, some of the challenges of handling errors in multi-protocol translation are elaborated and, specifically, the case of translation between MQTT and CoAP is studied. The error cases defined may not be exhaustive; however, they represent some of the most common and, in some cases, most challenging errors.
Error cases
Connection errors can occur when trying to establish new connections, when a current connection is lost for some reason, or when a connection cannot be closed gracefully. These kinds of errors need to be detected and translated appropriately to ensure efficient use of resources (at the end points as well as at the translator). Also, information about connection error events needs to be made known for analysis of network performance and for identifying candidates for possible improvements.
Lossy communication errors are a real problem in wireless sensor networks (WSN), which make up a good proportion of the future IIoT. The problem with lossy communication is made more complex by the issue being handled at different layers of the OSI stack. A higher level protocol may rely on a lower layer to guarantee transport, while the target protocol may perform such transport checking itself. This means that handling lossy communication may need to go across layers, or provide an informational error alert and rely on the application layers to monitor and perform corrective action if needed.
Response related delays and application introduced delays are two categories of delay related errors. Mismatched timeouts at the communication or application layers can lead to one sided timeouts. To handle one sided timeouts the channels need to be re-synchronized; how the translator should deal with this is an open question. Application delays can occur when resource constrained devices are not able to service a request or publish an update within the time limit expected by the other party.
Transient Error Cases
Transient error cases can occur at the end points but also in the translator. This class of error cases relates to errors which can occur at any time and will generally ‘heal’ in a short period of time. They require special treatment in handling and translation, in order to maintain protocol behavior.
Transient errors due to resource usage occur when there is an increase in transla-
tion demand to such an extent that the translator is no longer capable of handling the
throughput. It must take remedial action in accordance with the protocols which are be-
ing translated. These actions are highly dependent on the source of the increased demand
and the capability of the translator to influence this demand. This can be illustrated
where two end-points have a contract for delivery of notifications on periodic updates.
If the size of the messages begins to increase while the rate of the messages remains at
a high frequency, the translation of these message packets may consume resources which
are not available. In this case the source of the data cannot be asked to reduce the
frequency as the contract is between the end points. In such a case the translator must
become a negotiating party and reduce the frequency or the size of the messages. In
some cases the protocol does not allow such freedom for negotiation and in fact requires
the end-point to drop the connection in such situations, as is the case for MQTT [21].
The second transient error case stems from buffer problems. Buffer problems can occur when resource usage increases without correct remedial actions. But in this case we are referring to mismatches in buffer sizes between protocols and end-points. While one high performance end-point, such as a REST based web application, may be able to handle large verbose messages, once these are translated for a constrained end-point, such as a CoAP node, an error will occur between the end-points; once again remedial action will need to be taken and the event will need to be logged.
Lastly, transient errors can occur due to mismatches in protocol or application
4 Proposed Solution
There is much work to be done in order to address the challenges described in the previous section. As each challenge is tackled, there is likely to be an increase in solution complexity. This section addresses error code translation and discusses QoS and error reporting.
Figure 1: MQTT and CoAP block diagrams. The bold lines show direction of error reporting
The MQTT protocol defines error codes for the initial connection and subscription control packets; other control packets do not have associated error codes [21]. The decoupled nature of MQTT networks means that clients have much less visibility of errors occurring in other clients.
Below are two tables with the error codes producible in MQTT and in CoAP. Table 1 shows the error cases which are generated by MQTT and mapped to CoAP. The mapping in this case is used when a CoAP client is attempting to initiate a subscription to an MQTT topic.
In this case, error codes generated by the MQTT broker can be translated and passed to the CoAP client. For example, the CoAP client will be awaiting the response to its GET request; if the MQTT broker does not allow the connection or the subscription, then this error can be passed on to the CoAP client.
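As an illustration of such a mapping (the concrete entries of Table 1 are not reproduced here, so the pairing below is an assumption rather than the paper's exact table), the MQTT 3.1.1 CONNACK return codes could be translated into CoAP response codes along these lines:

import org.eclipse.californium.core.coap.CoAP.ResponseCode;

// Illustrative only: translates MQTT 3.1.1 CONNACK return codes into CoAP
// response codes, so that a CoAP client waiting on its GET (observe) request
// receives a meaningful error instead of silence.
public final class MqttToCoapErrorMap {

    public static ResponseCode map(int connackReturnCode) {
        switch (connackReturnCode) {
            case 0x01:                                     // unacceptable protocol version
            case 0x02:                                     // identifier rejected
                return ResponseCode.BAD_REQUEST;           // 4.00
            case 0x03:                                     // server (broker) unavailable
                return ResponseCode.SERVICE_UNAVAILABLE;   // 5.03
            case 0x04:                                     // bad user name or password
            case 0x05:                                     // not authorized
                return ResponseCode.UNAUTHORIZED;          // 4.01
            default:
                return ResponseCode.INTERNAL_SERVER_ERROR; // 5.00
        }
    }

    private MqttToCoapErrorMap() { }
}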
In Table 2 the CoAP error codes are mapped to MQTT. These represent the case
when an MQTT client subscriber is attempting to retrieve data from a CoAP server.
This case is illustrated in Figure 3.
In this case there is no path for the error codes to be transferred to the subscribing MQTT client. This is due to the nature of the MQTT protocol, and so the CoAP error codes do not have a direct mapping to MQTT clients.
However, the error codes can be used by the translator and can be mapped to translator behaviors. That is, for each protocol pair being translated, the translator will exhibit different behavior in taking corrective actions or logging. This could be translating the error code as in Table 1, or other remedial actions as defined in the translator. For the use case implemented in this paper, the translator actions are described in Section VI.
known at the network level. Besides handling loss, delay and utilization metrics, their more specific versions [22] should be kept under control: one-way and two-way throughput and delay, as well as their variance. Availability as a QoS metric is hard to address other than in binary terms: either it is available, or it is not. Loss as a quality metric gets another meaning here, namely loss in translation, where not the whole message but parts of it get lost. Depending on the context, the parameters that could not be mapped may lead to QoS degradation, or may have no noticeable effect.
In this case CoAP is not suitable as a service consumer. A high level diagram of this
scenario can be seen in Figure 5.
The translator must connect the MQTT broker and the CoAP server to allow data to flow. There are many possible error conditions, as stated in Section III. As a proof of concept, the authors have chosen to detect a sensor disconnect at the wheel loader. In an industrial environment such as mining, road works or construction sites, a disconnection error is something which the translator must be able to handle. The interaction diagram between the different components is shown in Figure 6.
Of particular interest in this scenario is the nature of CoAP running on UDP, which is connectionless and therefore requires an agreed timeout based approach to detecting connection loss. On the MQTT side, TCP is used and therefore a connection state is maintained between the end points. Even so, not all TCP disconnections can be identified, and so if detection of a disconnection is desired, a heartbeat or a keep-alive timeout is also required.
The translator itself is not the core of this paper, and so the implementation has been kept to a simple transfer of payload from CoAP to MQTT. Semantics and other protocol procedures have not been considered. Using a hub and spoke architecture for the implementation results in a decoupled, component based translator with only simple object method calls being set up by the hub between the spokes. This can be seen in Figure 8.
Running the experiment uncovered several error cases in addition to the disconnect error case that was to be modeled. This unintended error case was very useful as it reflects a real world use case. The wheel loader sensor is a resource constrained device running on a low power network, and therefore notification timeouts due to late delivery or packet loss were common. In these cases the CoAP spoke would follow the CoAP specification and, on max-age expiry, would attempt to re-register the observation. This was in almost all cases successful and would re-establish the periodic notifications. However, in the MQTT specification there is no mechanism within the protocol to pass information regarding an update timeout except by disconnection. The keep-alive timer was controlled by the Paho library and so would keep the connection alive even when no data was being sent. This means that a non-standard message would need to be sent from the CoAP spoke to inform the MQTT client that the sensor has had a timeout. Processing of this message would be at the discretion of the MQTT client application.
However, in the event of a disconnection error, which does have protocol procedures in both MQTT and CoAP, special behavior is still required. The CoAP spoke monitors the max-age of the last resource state update. If this max-age is exceeded, then a timeout is noted and the CoAP spoke will attempt to re-establish the observation, as described earlier. However, if the re-establishment is not successful, the CoAP spoke cancels the observation and passes an error event to the translator hub. In a normal setup, an ungraceful disconnection detected by the MQTT system would result in all subscribing clients being delivered a last will message, if one is available. However, in this case the MQTT system does not have visibility of the CoAP disconnection. It is therefore necessary to translate and notify the MQTT system of this disconnection event. The proposed translator maps the CoAP disconnection to a last will message on the MQTT side. This mapping takes place in the translation hub. Figure 9 shows the interaction between internal components of the translator system.
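A minimal sketch of this CoAP-spoke behavior is given below; the class and method names (TranslatorHub, ErrorEvent and so on) are hypothetical and do not reflect the published implementation.

// Sketch only: monitors the Max-Age of the last notification, re-registers the
// observation on expiry, and raises a disconnect event (mapped by the hub to an
// MQTT last-will message) if re-registration fails.
class CoapSpokeWatchdog {

    private final TranslatorHub hub;         // passes error events between spokes
    private volatile long lastUpdateMillis;  // time of the last notification
    private volatile long maxAgeMillis;      // Max-Age announced by the CoAP server

    CoapSpokeWatchdog(TranslatorHub hub) {
        this.hub = hub;
    }

    void onNotification(long maxAgeSeconds) {
        lastUpdateMillis = System.currentTimeMillis();
        maxAgeMillis = maxAgeSeconds * 1000;
    }

    void check() {  // called periodically by the spoke
        if (System.currentTimeMillis() - lastUpdateMillis <= maxAgeMillis) {
            return;                            // resource state is still fresh
        }
        hub.raise(ErrorEvent.TIMEOUT);         // non-standard "timeout" event signal
        if (!reRegisterObservation()) {
            cancelObservation();
            hub.raise(ErrorEvent.DISCONNECT);  // hub maps this to an MQTT last-will message
        }
    }

    private boolean reRegisterObservation() { /* re-issue GET with the Observe option */ return false; }
    private void cancelObservation()        { /* proactively cancel the observation   */ }
}

interface TranslatorHub { void raise(ErrorEvent event); }
enum ErrorEvent { TIMEOUT, DISCONNECT }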
In this way, the translator has made use of the protocol procedures of both sides to make sure both protocols are aware of the error event. For interoperable use of the translator between systems, the definition of both the timeout event and the disconnect event signals is a must. For this implementation, two event signals were defined and encoded in XML. These tags are presented below.
<s n="event" sv="disconnect"/>
<s n="event" sv="timeout"/>
Implementation of the rudimentary good-path payload translation between CoAP and MQTT was relatively trivial. However, implementing the mapping of disconnection and timeout errors introduced a lot of complexity to the code. By refining the implementation, the error handling and mapping were moved to the central hub. This reduced the complexity of the translation effort immensely. Each translation spoke did not need to have knowledge of the other. This means that once a spoke is developed it can be connected through the hub to any other spoke. This reduces the effort required to develop translation services between the ever changing array of protocols.
Decoupling the protocol specific handling into spokes and the translation aspects into the translation hub has meant that the solution is extensible with new spokes and offers interesting potential for a multi-spoke translator. It has also meant that error cases can be handled in a standard manner within the translation hub, and new spoke development need only use the available hooks in the translation hub in order to pass error conditions.
The results were promising, with key advantages to the use of a hub and spoke SOA based translator: its active participation in the network, its simplicity for handling errors, and its potential for extension to being orchestrated and to semantic translation.
6 Conclusion
This paper has presented the challenges and a solution for error handling in multi-protocol translation scenarios for SOA systems. This work is motivated by the creation of new
7 Future Work
In the future, the architecture of the multi-protocol translator needs to be defined and
refined. Use case extension to other SOA protocols such as XMPP and REST will also
be needed.
Challenges are seen in orchestration and co-ordination of the translator end-points,
managing resource requirements, providing security in terms of privacy, confidentiality
and authenticity, and proving performance and flexibility gains.
Performance metrics, evaluation and benchmarking will be needed in order to prove the advantages of a multi-protocol SOA translator. Further development of the semantics used to send error information and signals should also be investigated.
Error logging and diagnosis also require much work. Logging encompasses a larger scope than just error events and should enable SOA based applications to create an end-to-end stream of events. An API must be developed with the ability for both machine query and manual query of the logs.
8 Acknowledgment
The authors would like to express their gratitude towards the European Commission and
Artemis for funding, and our partners within the Arrowhead project.
References
[1] F. Blomstedt, L. L. Ferreira, M. Klisics, C. Chrysoulas, I. M. de Soria, B. Morin,
A. Zabasta, J. Eliasson, M. Johansson, and P. Varga, “The arrowhead approach for
soa application development and documentation,” in Industrial Electronics Society,
IECON 2014 - 40th Annual Conference of the IEEE, Oct 2014, pp. 2631–2637.
[2] D. Evans, “The internet of things how the next evolution of the internet is changing
everything,” White Paper, Cisco, April 2011.
[18] C. Aoun and E. Davies, “Reasons to move the network address translator - protocol
translator (nat-pt) to historic status,” Internet Requests for Comments, RFC Editor,
RFC 4966, July 2007.
[21] “Mqtt version 3.1.1,” OASIS, October 2014. [Online]. Available: http://docs.oasis-open.org/mqtt/mqtt/v3.1.1/os/mqtt-v3.1.1-os.html
[22] P. Varga and I. Moldovan, “Integration of service-level monitoring with fault man-
agement for end-to-end multi-provider ethernet services,” IEEE Transactions on
Network and Service Management, vol. 4, no. 1, pp. 28–38, June 2007.
[23] M. Kovatsch, S. Mayer, and B. Ostermaier, “Moving application logic from the
firmware to the cloud: Towards the thin server architecture for the internet of
things,” in Innovative Mobile and Internet Services in Ubiquitous Computing (IMIS),
2012 Sixth International Conference on, July 2012, pp. 751–756.
Paper C
IoT Interoperability - On-demand
and low latency Transparent
Multi-protocol Translator
Authors:
Hasan Derhamy, Jens Eliasson, Jerker Delsing
IoT Interoperability - On-demand and low latency
Transparent Multi-protocol Translator
Abstract
In the Industrial Internet of Things there is a clear need for a high level of interoperability between independently developed systems, often from different vendors. Traditional methods of interoperability, including protocol gateways and adapters, are often applied at the network layer. Recent work on application interoperability has emphasized the use of middleware or protocol proxies/gateways. However, middleware tends to move the interoperability problem rather than solve it, and there are scalability issues in the growing number of proxies, the re-configuration effort, and the required bandwidth and processing overheads.
This paper proposes a secure, on-demand and transparent protocol translator for the Industrial Internet of Things. Targeting the challenge of interoperability between IP-based communication protocols, the paper analyses current solutions and develops a set of requirements to be met by IoT protocol interoperability. The proposed protocol translator is not a middleware but a SOA-based participant: it is used on-demand when needed, it does not introduce design time dependencies, it operates transparently, it supports low latency, and it is secured through the use of Arrowhead authorization and authentication.
1 Introduction
The Internet of Things (IoT) is a large and heterogeneous collection of networks, devices, developers, owners, users and stakeholders. Advances in low cost processors have been a key enabler of intelligent automation devices. IoT takes the next step of networking these devices, resulting in intelligent environments. With the heterogeneity of independent stakeholders, a plethora of protocols have been developed. Many of the protocols will never be known, as they are proprietary. But even among standardized protocols there is a large variety to choose from. They are the result of evolving requirements and technology, leading to a highly dynamic ecosystem of co-existing protocols unable to work with each other. Interoperability in such an ecosystem is a major challenge, and yet it is a crucial aspect of a successful IoT.
This challenge has motivated a large body of research in both academia and industry. One such ambitious project is Arrowhead [2]. With 79 partners from industry and academia, its grand challenges are “enabling the interoperability and integrability of services provided by almost any device”. Arrowhead envisions that Service-Oriented
is often protocol dependent and, when translating between protocols, the SLA must be preserved or relevant changes notified to the consumer and provider systems. These requirements are described in Table 1.
The next section will present the related work in detail before briefly introducing the
Arrowhead framework in Section 3.
2 Related Work
Middleware is a common approach to addressing interoperability; some examples are uMiddle [9], starlink [7], INDISS [8] and UIC [15]. In UIC, client systems are required to implement parts of the middleware in order to interact with the middleware infrastructure and with each other. On the other hand, uMiddle and INDISS require no changes to the client systems they are bridging, so they are transparent to the client. As these solutions have focused on networked homes, they are oriented toward protocols such as Service Discovery Protocol and Simple Service Discovery Protocol. For use in wider IoT areas, there are problems with security and scalability. Furthermore, the middleware solutions have moved the interoperability challenge: they bridge some protocols, but would themselves require bridging between each other. In fact, both starlink and uMiddle claim to address interoperability issues between existing middleware, while themselves introducing another layer of middleware.
ESB is a form of middleware infrastructure used in SOA-based enterprise systems. It
is often used in highly controlled or static environments where service composition is the
primary challenge. It provides an intermediate protocol, for example as in [13], to which
other service based protocols are translated. For dynamic systems or systems consisting
of many independent device owners, these solutions incur heavy configuration and delay
costs.
Protocol translation has a long tradition in network layer protocols, and research has been on-going since very early networks pre-dating the Internet. A detailed survey of the problem of protocol conversion was presented by Green in [16]. In this work Green describes in great detail the challenges of protocol conversion and presents a structured approach toward converter creation. Green's work on protocol conversion carries concepts and ideas which can be applied to current networks.
Kenneth et al. in [17] lay the groundwork for using formal methods in protocol conversion. This work utilized two formal methods, conversion via projection and conversion via finite state machine, presented by Okumaru. This fundamental work has led to further, more recent, work in the area. Sinha et al. in [18] worked toward a formal method of synthesizing protocol converters for use in System-on-a-Chip (SoC) designs. Utilizing CTL module checking, they were able to generate verifiable converters capable of buffering signals and handling behavioral protocol mismatches.
Also relevant to this work is protocol translator generation. Work in this area follows very closely from the formal methods; automated converter generation is presented by Liu et al. in [19] and Bromberg et al. in [20]. They argue that manual generation of translation schemes is error prone, costly and introduces implementation delays when new protocols are introduced. Automatic generation of translators is a huge cost reduction, but requires specification in a Domain Specific Language (DSL). This issue is not addressed in this paper.
Protocol proxying is a well known method of creating interoperable applications. For example, an HTTP-CoAP proxy mapping [21] was proposed even while the CoAP protocol was still in early draft form. Several works have been undertaken to prove the implementation and usability of HTTP-CoAP proxies, including work by Lerche et al. in [10] and Castellani et al. in [11]. In [10] the forward proxy required that the HTTP client be configured to pass the CoAP URI in the query path of the proxy request. In [11], in addition to the forward proxy, a reverse proxy was also utilized, which requires the CoAP server nodes to register their routes with the proxy. After this configuration, the rest of the operation can take place transparently between the protocols. In order to remove even this configuration, [10] suggests that an interception proxy is usable, but not desired as it requires further network configuration. The protocol proxy does not act as a formal middleware, but in most configurations is an always-on translator, meaning that even when translation is not required the message will transit through the proxy and incur a certain delay.
3 Arrowhead Framework
The Arrowhead framework [22] provides the operational environment for the translator. Arrowhead exploits existing open standards for communication and security. It provides three core functions: service registry, orchestration, and authorization. These are detailed in the next sub-sections. Directly related to the challenge of interoperability is the SOA principle “Standardized Service Contract” [23]. Arrowhead uses the document structure shown in Figure 1. In the center, the Service A Contract defines Service A, which is exchanged between System P and System C. The four documents A-SD, A-IDD, A-CP and A-SP are presented by Blomstedt et al. in [24] and are described in Table 2. As defined by the Object Management Group, the SoaML [25] specification has participant entities which provide and consume services. Within the context of the Arrowhead framework, a system is the participant entity.
3.2 Authorization
Authorization is a core service which allows service providing systems to delegate responsibility for access control to a centralized system. This allows constrained systems a high level of security without introducing security processing overhead in the service exchange [26].
3.3 Orchestration
Within the context of SOA-based software systems, orchestration is the process of finding the optimal service provider for a target consumer. For example, a thermostat for a room within a building will need to consume a temperature service. Orchestration should be able to identify a suitable temperature service provider, such that the thermostat is able to function appropriately.
4 Proposed SOA-based Translator
Direct pairwise translation between protocols requires a translator for every protocol pair. For example, 4 protocols will require 6 translators, 5 protocols will require 10, and so on. This is illustrated in Figure 2-a.
To reduce the number of translators, an intermediate protocol can be selected from the existing protocols. All protocols will be translated to the intermediate protocol, and the number of translators will be equal to n-1. But, as shown in Figure 2-b, a peering of nodes with protocols A and E will use two translators to communicate. This results in increased information loss and delay.
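The translator counts above follow directly from pairwise combination; as a quick check:

\[
  N_{\mathrm{pairwise}} = \binom{n}{2} = \frac{n(n-1)}{2},
  \qquad
  N_{\mathrm{intermediate}} = n - 1 .
\]

For n = 4 this gives 6 pairwise translators and for n = 5 it gives 10, matching the counts stated above, while an intermediate protocol needs only n-1 translators at the cost of up to two translation steps on a path such as A to E.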
To address this issue, the proposed translator utilizes an intermediate format, not a protocol, shown in Figure 2-c. The intermediate format is not sent on the wire and can capture all protocol specific information. The information loss is reduced to that of a single translation, from intermediate format to target protocol. There is a lower parsing requirement going from protocol to intermediate format, and so the translation delay is reduced to that of a single translator.
The proposed translator is located within local clouds of IoT Things. Figure 3 shows
a cloud servicing an industrial site, while another cloud has mobile things and a charging
station. In these cases, the translator is positioned alongside the Arrowhead core services. The translator will utilize information regarding the service contract (detailed in Table 2) to identify mismatches and what is required to resolve these mismatches. For the
protocol translator, only the Communication Profile is relevant, as this is what captures
all protocol related information. But the rest of the document structure will be utilized
as the translator is extended with translation between different semantics and security
technologies.
Referring back to the Arrowhead service contract, it can be said that the proposed translator resolves differences in the contracts between a service provider and a consumer. This interoperability scenario is shown in Figure 4. System P honors contract Service A and System C honors Service A*. The difference between the service contracts is in the communication protocol and possibly the interaction pattern. The translator must honor both contract A and A* and bridge the two service contracts.
The translator is transparent to both systems in the service exchange, which means that the translation mechanism does not rely on any in-band information beyond the normal service contract. The translator is therefore provided with out-of-band information in order to correctly honor both the Service A and Service A* contracts. This out-of-band
The Translation Service interface receives information regarding the service contracts
to be honored and extracts the required protocol information. Part of the request also
includes the addressing information of the service provider instance. There is further
5 Translator Architecture
In this section, the implementation of the proposed translator is described. It is an important contribution as it serves as a proof of extendability and a guide for refining the translator.
Each protocol is implemented as a service provider (server) and a service consumer (client). An overview block diagram of the translator architecture is given in Figure 6. The proposed translator has been implemented in Java. The translation system is composed of a single translator service instance; the translation service initializes and manages translation hubs. Each translator hub has two base spokes, which are initialized according to the requested protocol configurations.
The translator service and hub are intended to be kept simple and lightweight. The sophistication of the translator is in the spoke implementations. Almost any protocol library could be used to implement the protocol spokes. Evolution and extension of the spoke capability should not impact the core of the translator or other spoke implementations.
the body of the packets; payloadFormat is the format of the payload; exception is the
error code.
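A minimal sketch of how the intermediate format and the spokes could be expressed is given below; only the fields actually named in the text (the packet body, payloadFormat and exception) are taken from the paper, while the type and interface names are assumptions.

// Sketch only: the intermediate format lives in memory and is never sent on the wire.
class IntermediateMessage {
    byte[] payload;        // body of the packet, copied between protocols
    String payloadFormat;  // format of the payload (e.g. an Internet media type)
    int exception;         // error code, normalized by the originating spoke
    // further protocol specific metadata can be attached here without affecting the wire format
}

// Each protocol contributes a service provider (server) spoke and a service consumer (client) spoke.
interface Spoke {
    void deliver(IntermediateMessage message);  // the hub pushes a translated message into the spoke
    void attach(TranslationHub hub);            // wired in when the hub is created
}

interface TranslationHub {
    void forward(Spoke origin, IntermediateMessage message);  // route to the opposite spoke
}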
5.4 Security
IoT security offers many challenges in its own right. The protocol translator does not
need to interrogate service payloads, and so end-to-end encryption of the payload is
possible. The question of authenticity and authorization is addressed by handshaking
with the Arrowhead authorization system, as shown in operations b, e and g of Figure 9.
6 Transparency
Within the context of this paper, transparency is from the perspective of the application
systems. That is, application system designers should not be concerned with protocol
mismatches or how they are resolved.
The proposed transparency works in conjunction with the Arrowhead orchestration described in Section 3.3. Figure 9 shows a sequence diagram of a translation. After Arrowhead system start-up, in 9-a the translator receives a service request and sets up a transient translation hub. In 9-b the hub creates two protocol spokes and retrieves temporary credentials from the Arrowhead authorization system. Next, in 9-c it sends the response with the address information of the provider spoke. This could be an MQTT topic + broker, an XMPP chat room + server or a CoAP/HTTP URL. 9-d initiates the service exchange. System A is authenticated in 9-e. The translated request is now sent to the target system in 9-f. Prior to servicing the request, the translator is authenticated and authorized by the authorization system. Finally, the service response is translated back to the origin. As can be seen from this diagram, the orchestrated “consuming” system is not required to take any actions outside normal Arrowhead boot-strapping.
Following the principles of SOA-based design, the translator system is autonomous and so not coupled to the orchestration system. The translator system monitors and cleans up redundant translator hubs. Garbage collection of disused translators is handled by an internal spoke watchdog which signals inactivity after a timeout. The timeout is configurable in the translator service request generated by the orchestrating system. The translator may receive duplicate requests for the same translation pairing. De-duplication is handled by the translator system: rather than creating a new translation hub instance, the existing one will be returned in the response.
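The de-duplication and clean-up just described could be realized along the following lines; the key structure and names are assumptions, not the published implementation.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch only: one hub per translation pairing, reused on duplicate requests and
// removed once its spoke watchdog reports inactivity.
class TranslatorService {

    record PairingKey(String providerEndpoint, String consumerProtocol, String providerProtocol) { }

    private final Map<PairingKey, Hub> hubs = new ConcurrentHashMap<>();

    /** Returns the existing hub for a pairing if one is active, otherwise creates one. */
    Hub request(PairingKey key, long inactivityTimeoutMillis) {
        return hubs.computeIfAbsent(key, k -> new Hub(k, inactivityTimeoutMillis));
    }

    /** Invoked by a hub's spoke watchdog once no traffic has been seen for its timeout. */
    void onInactive(PairingKey key) {
        Hub hub = hubs.remove(key);
        if (hub != null) {
            hub.shutdown();
        }
    }

    static class Hub {
        Hub(PairingKey key, long inactivityTimeoutMillis) { /* create and wire the two spokes */ }
        void shutdown() { /* release sockets, cancel observations, etc. */ }
    }
}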
7 Testing
The delay introduced by the translator has been measured and evaluated against the Californium proxy. A control test with a single protocol scenario was also run. These three setups are seen in Figure 10. Test setup 10-a has a CoAP request generated from the Firefox Copper plug-in and sent directly to the CoAP sensor, of course via a 6LoWPAN border router. Setup 10-b generated an HTTP request from the Firefox HttpRequester plug-in to the translator, which then generated the corresponding CoAP request to the CoAP sensor. Setup 10-c followed on from setup 10-b except that it utilized the Californium Proxy rather than the SOA translator. Both translator and proxy were run on the same BeagleBone Black (BBB) hardware.
The hardware is set up with the Mulle sensor node running a ContikiOS CoAP server offering two services. Firstly, the Wheel Loader service measures the vibration, rotation and temperature of a ball bearing on a Volvo wheel loader; the service endpoint is addressed as coap://[sensor-ip]:5683/wheel loader. The payload structure can be seen in Figure 11 and has a length of approximately 375 bytes. It is transferred in two CoAP blocks and is transmitted as a confirmable response.
The Power service measures the sensor node's power status. The payload can be seen in Figure 12 and is approximately 202 bytes. It is also a confirmable response, and is sent in a single CoAP block.
The BBB is running the Debian Wheezy Linux distribution and is powered and tethered to a laptop via a USB slave connection. The USB network adapter (usb0) is used and has static IP addresses 192.168.7.2 and FDFD:55::80FE. IPv6 packet forwarding has been enabled. The border router of the 6LoWPAN network is connected to the BBB through Contiki's tunslip6 program. Therefore, there is full IP connectivity from the test PC to the sensor node on the 6LoWPAN network. This hardware setup is shown in Figure 13.
There are three PuTTY sessions connected to the BBB for monitoring and control of the applications. In order to monitor the delay introduced by the translator, timing measurements are scoped to a single time domain on the BBB.
The network traffic monitor tcpdump is running on the BBB and is used to take time stamps on the external usb0 network interface and the 6LoWPAN tun1 network interface. The network time stamps provide two round trip times (RTT) which, when subtracted, isolate the time spent between the BBB's network interfaces.
In the case of the proposed translator, there are 4 additional time measurements which help to analyze the translation delays. The timers t1, t2, t3 and t4 are Java nanosecond timers with millisecond resolution. All time measurements from Figure 13 are listed in Table 4. The calculated times in the lower part of the table are: T1 is the RTT through usb0; T2 is the RTT through tun1; T3 is the duration that the packet is held within the BBB.
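In other words, the time spent inside the BBB is obtained by subtracting the two round trip times:

\[
  T_3 = T_1 - T_2 ,
\]

where T1 is the RTT observed on usb0, T2 is the RTT observed on tun1, and T3 captures the delay added inside the BBB by IPv6 forwarding, the Californium proxy or the proposed translator, depending on the scenario.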
The tests were carried out 20 times per service request per scenario. The results are averaged and can be seen tallied in Table 5. The CoAP only scenario shows the optimal performance, with only a very small IPv6 forwarding delay in the BBB. The Californium Proxy had the largest delays on the BBB, with the averaged delay introduced by the proxy being between 146 ms and 177 ms. The translator showed better performance, with shorter delays of between 50 ms and 77 ms.
Shown in Figure 14 is a stacked line graph of the translator timing for the Power service request. The lower line is the time spent in the HTTP library, the middle line is the time spent in the translator application, and the top line is the time spent in the CoAP library.
Figure 14: Proposed translator delay timing for the /Power request
Except for a few outliers, there is little variability in the translator timing. The translator
hub and spokes are currently very simplistic, and most of the processing will be in the
protocol libraries, handling new requests and tracking responses. Shown in Figure 15 is a
similar graph for the wheel loader request. There is much greater variability in the timing
for this request. The CoAP library seems to be responsible for much of the variability.
This is likely to be related to the 2-block transmission and the CoAP library handling
this differently to the first case.
The CoAP library tends to consume a larger proportion of the processing time than expected. This could be because the CoAP library uses many threads [28], so running on a single-core processor perhaps introduces overhead from thread context switching.
The next two figures, Figures 16 and 17, compare the RTT of the three scenarios. It can be seen in the Power request graph (Figure 16) that the CoAP-only transmission is very consistent. The Californium Proxy varies over the requests, while the translator has a few outliers but otherwise stays within a narrow band of ±40 ms.
In the case of the Wheel Loader service request, the control time increases from just above 125 ms to close to 250 ms. The Californium Proxy seems to settle down after the first 7 to 10 requests. Removing this settling time would bring its timing very close to that of the translator.
Figure 15: Proposed translator delay timing for the /Wheel Loader request
It is likely that the difference in timing between the Californium Proxy and the proposed translator is due to the complexity of the implementation. The proxy is constructed to be able to handle a request from any HTTP client and then to translate and forward it to any CoAP server. Also, the forwarding address that the proxy uses must be passed in-line in the HTTP path. The proxy must manage many in-flight messages sent to and from many end-points, whereas the translator has a predetermined server and is expected to be used by only one client. It does not need to process addressing information while translating. Another difference is that the proxy also handles caching, which the translator does not. Still, the proxy is a direct protocol-to-protocol translator and is therefore optimized for that translation, while the proposed translator is built to translate between any protocol and any other, dependent only on a protocol spoke being available.
8 Conclusion
IoT interoperability is a challenge that is being addressed widely. The limitation of current solutions is their reliance on permanent middleware or highly configured networks and applications. This results in scalability and performance issues. They create operational silos requiring specialized integration points. Moreover, the IoT domain imposes new requirements on the application of interoperability solutions, as described in Table 1.
This paper addresses this challenge by proposing an on-demand and transparent multi-protocol translator for SOA-based IoT applications. Operating alongside a SOA-based orchestrator, the proposed translator is used on-demand when a definite protocol mismatch exists. The orchestrator composes the service exchange through the translator, resulting in zero application configuration.
The proposed translator architecture splits a protocol into two spokes: a service provider spoke and a service consumer spoke. The translator can handle many protocols, as each protocol implements just two spokes. The spokes integrate through an intermediate format, which allows any combination of protocol spokes. However, translating from publish-subscribe to request-response patterns demands a pro-active spoke implementation. The MQTT spokes actively pull and push messages, imitating the original interaction pattern.
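To make the two-spoke split concrete, a minimal Java sketch of the structure is given below. The type and method names (IntermediateMessage, ProviderSpoke, ConsumerSpoke, TranslatorHub) are hypothetical and only illustrate the architecture described above; they are not the actual implementation.

import java.util.concurrent.CompletableFuture;

/** Protocol-neutral intermediate message exchanged between the two spokes (illustrative only). */
record IntermediateMessage(String path, String contentType, byte[] payload) {}

/** Provider-side spoke: re-issues the request in the provider's protocol (e.g. CoAP). */
interface ProviderSpoke {
    CompletableFuture<IntermediateMessage> forward(IntermediateMessage request);
}

/** Consumer-side spoke: accepts requests in the consumer's protocol (e.g. HTTP)
    and hands each one to the hub as an IntermediateMessage. */
interface ConsumerSpoke {
    void start(TranslatorHub hub);
}

/** The hub wires exactly one consumer spoke to one provider spoke. */
final class TranslatorHub {
    private final ProviderSpoke provider;

    TranslatorHub(ProviderSpoke provider) {
        this.provider = provider;
    }

    /** Relay an intermediate message to the provider side and return its response. */
    CompletableFuture<IntermediateMessage> relay(IntermediateMessage request) {
        return provider.forward(request);
    }
}

Under this view, adding support for a new protocol only requires implementing its two spokes against the intermediate format; a publish-subscribe spoke such as MQTT would additionally pull and push messages on its own initiative, as noted above.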
Testing showed that the proposed translator performed as well as, if not better than, a common protocol proxy, the Californium proxy. The proposed translator introduced on average a 50 ms delay for a single-block CoAP request and response, and a 77 ms delay for a two-block CoAP transmission. This difference is caused by the CoAP library's handling of multi-block messages. The result demonstrates that the intermediate format does not introduce more delay than direct protocol-to-protocol translation, which means that the proposed translator offers satisfactory performance with a minimal increase in latency.
The paper also proposes security and reporting aspects of the translator; however, these are not yet implemented. The translator actively participates in security handshakes, resulting in fine-grained access control and authentication. The translator utilizes event reporting for active fault detection and to support diagnostics. The translator follows SOA principles and can operate either autonomously or as a traditional gateway.
9 Future work
Future work involves developing one-to-many protocol fan-out, multiple-translator orchestration, and translation of encoding and semantics.
The current hub only supports two spokes. An enhancement would be to support a one-to-many fan-out. This would enable multiple service consumers of different protocols to access a single service provider.
The current translator operates as a single translator within a local cloud. It is desirable to deploy multiple translators to a single local cloud to help with load balancing and to reduce communication latency by selecting a nearby translator.
As stated in the interoperability requirements, translation of encoding, compression and semantic differences must be addressed. Once this is achieved, only the abstract service description needs to be common between two service contracts in order to achieve interoperability.
Finally, the proposed security, reporting and formal specification of the translator need to be implemented.
References
[1] H. Derhamy, J. Eliasson, J. Delsing, and P. Priller, “A survey of commercial frame-
works for the internet of things,” in 2015 IEEE 20th Conference on Emerging Tech-
nologies Factory Automation (ETFA), Sept 2015, pp. 1–8.
[3] S. K. Noh and S. H. Lee, "An implementation of gateway system for heterogeneous protocols over atm network," in Communications, Computers and Signal Processing, 1997. 10 Years PACRIM 1987-1997 - Networking the Pacific Rim. 1997 IEEE Pacific Rim Conference on, vol. 2, Aug 1997, pp. 535–538 vol.2.
[4] E. Benhamou and J. Estrin, “Multilevel internetworking gateways: Architecture and
applications,” Computer, vol. 16, no. 9, pp. 27–34, Sept 1983.
[5] K. L. Calvert and S. S. Lam, “Adaptors for protocol conversion,” in INFOCOM
’90, Ninth Annual Joint Conference of the IEEE Computer and Communication
Societies. The Multiple Facets of Integration. Proceedings, IEEE, Jun 1990, pp. 552–
560 vol.2.
[6] P. Dhar, R. C. Ganguly, S. Das, and D. Saha, “Network interconnection and protocol
conversion-a protocol complementation approach,” in TENCON ’92. ”Technology
Enabling Tomorrow : Computers, Communications and Automation towards the
21st Century.’ 1992 IEEE Region 10 International Conference., Nov 1992, pp. 116–
120 vol.1.
[7] Y. D. Bromberg, P. Grace, and L. Réveillère, “Starlink: Runtime interoperability
between heterogeneous middleware protocols,” in Distributed Computing Systems
(ICDCS), 2011 31st International Conference on, June 2011, pp. 446–455.
[8] Y.-D. Bromberg and V. Issarny, “Indiss: Interoperable discovery system
for networked services,” in Proceedings of the ACM/IFIP/USENIX 2005
International Conference on Middleware, ser. Middleware ’05. New York, NY,
USA: Springer-Verlag New York, Inc., 2005, pp. 164–183. [Online]. Available:
http://dl.acm.org/citation.cfm?id=1515890.1515899
[9] J. Nakazawa, H. Tokuda, W. K. Edwards, and U. Ramachandran, “A bridging frame-
work for universal interoperability in pervasive systems,” in Distributed Computing
Systems, 2006. ICDCS 2006. 26th IEEE International Conference on, 2006, pp. 3–3.
[10] C. Lerche, N. Laum, F. Golatowski, D. Timmermann, and C. Niedermeier, “Con-
necting the web with the web of things: lessons learned from implementing a coap-
http proxy,” in 2012 IEEE 9th International Conference on Mobile Ad-Hoc and
Sensor Systems (MASS 2012), vol. Supplement, Oct 2012, pp. 1–8.
[11] A. P. Castellani, T. Fossati, and S. Loreto, “Http-coap cross protocol proxy: an
implementation viewpoint,” in 2012 IEEE 9th International Conference on Mobile
Ad-Hoc and Sensor Systems (MASS 2012), vol. Supplement, Oct 2012, pp. 1–6.
[12] Mule esb, enterprise service bus, open source esb, mulesoft. MuleSoft. [Online].
Available: https://www.mulesoft.com/platform/soa/mule-esb-open-source-esb
[13] Ibm mq. IBM Corp. [Online]. Available: http://www-03.ibm.com/software/
products/en/ibm-mq
[14] Artix - micro focus. Micro Focus International plc. [Online]. Available:
https://www.microfocus.com/products/corba/artix.aspx
[17] K. L. Calvert and S. S. Lam, “Formal methods for protocol conversion,” IEEE
Journal on Selected Areas in Communications, vol. 8, no. 1, pp. 127–142, Jan 1990.
[18] R. Sinha, P. S. Roop, and S. Basu, “A module checking based converter synthesis ap-
proach for socs,” in VLSI Design, 2008. VLSID 2008. 21st International Conference
on, Jan 2008, pp. 492–501.
[23] T. Erl, SOA Design Patterns, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall
PTR, 2009.
[25] “Service oriented architecture modeling language 1.0.1,” Specification, May 2012.
[Online]. Available: http://www.omg.org/spec/SoaML/1.0.1/
[26] P. P. Pereira, J. Eliasson, and J. Delsing, “An authentication and access con-
trol framework for coap-based internet of things,” in Industrial Electronics Society,
IECON 2014 - 40th Annual Conference of the IEEE, Oct 2014, pp. 5293–5299.
[28] M. Kovatsch, M. Lanter, and Z. Shelby, “Californium: Scalable cloud services for
the internet of things with coap,” in Internet of Things (IOT), 2014 International
Conference on the, Oct 2014, pp. 1–6.
Paper D
Orchestration of Arrowhead
services using IEC 61499:
Distributed Automation Case
Study
Authors:
Hasan Derhamy, Dmitrii Drozdov, Sandeep Patil, Jan van Deventer, Jens Eliasson and
Valeriy Vyatkin
Orchestration of Arrowhead services using IEC
61499: Distributed Automation Case Study
Hasan Derhamy, Dmitrii Drozdov, Sandeep Patil, Jan van Deventer, Jens Eliasson and
Valeriy Vyatkin
Abstract
1 Introduction
Modern concepts like the Internet of Things, Cyber-Physical Systems and Industry 4.0 are defining the shape of future industrial automation systems. These concepts use state-of-the-art technologies like low-power wireless communication, web services and low-cost embedded devices. This, together with the increasing capabilities of computer networks and embedded microcontroller devices, enables for example enhanced distributed monitoring and control in factory automation applications.
As far as software development for distributed automation is concerned, the currently dominating paradigm of programmable logic controllers and their software framework defined by the IEC 61131-3 standard is showing many limitations. There are several approaches in software and systems engineering aiming at addressing the new challenges. The IEC 61499 standard [1] has been developed as one such effort in the automation community. It aims at a gradual extension of PLC capabilities and their convergence with distributed systems. The use of service-oriented architecture in automation was first proposed by Jammes and Smit [2], who outline opportunities and challenges in the development of next-generation embedded devices, applications and services resulting from their increasing intelligence. The work plots future directions for intelligent device
Loose coupling: since software modules provide services to other modules, they are designed in a relatively generic format. Communication between components is asynchronous and only done when required.
Claimed potential gains of using SOA include cost reduction, the potential to hire less-skilled labour, interoperability (cross-platform and cross-company) and implementation speed.
There is an increasing number of use cases demonstrating that the functionality of complex automation systems can be implemented in a distributed way. In [4, 5], it was shown that the functionality of material handling systems can be fully implemented via the collaborative effort of decentralised controllers embedded in basic devices, such as conveyor sections. In [6, 7, 8, 9] the combination of IEC 61499 with SOA is comprehensively presented, while [10] demonstrates the use of web services in IEC 61499 applications.
This paper addresses the challenges arising in the application of a relatively mature SOA-based, automation-oriented platform, Arrowhead, to the development of high-integrity automation applications that include a good deal of distributed control in addition to traditional data acquisition.
The results of this paper are threefold. Firstly, we describe the main features of the Arrowhead framework and the IEC 61499 architecture in the context of a use case that deals with implementing distributed control of a manufacturing system using a combination of Arrowhead services and IEC 61499 function blocks. Secondly, a framework for integration of IEC 61499 with SOA-enabled IoT is described. Finally, test results from a prototype implementation are presented.
The rest of this paper is structured as follows: Section 2 presents a brief overview of the IEC 61499 standard. Section 3 provides an overview of the Arrowhead Framework. The case study setup is discussed in Section 4. Section 5 presents elements of the case study automation implemented with the Arrowhead architecture, and Section 6 demonstrates the use of IEC 61499 for coordination and orchestration of Arrowhead services at the system level. Finally, the paper is concluded in Section 7.
2 IEC 61499
The IEC 61499 [1, 11] is an international standard that introduces an open architecture
for distributed control systems, which is an important class of embedded systems with
a strong legacy background. The standard is called a function block architecture after its main design artefact, which is an event-driven (and event-activated) function block. If one would abstract out unnecessary details, the standard introduces quite an elegant model of a distributed application as a network of function blocks connected via control and data flows. The control flow is modeled using the concept of an event that is emitted from an output of one function block and can be received at one or several inputs of other function blocks. The most essential claim of the IEC 61499 architecture is about minimizing developers' efforts in deploying automation software to different distributed architectures of hardware, which is attributed to the event-based communication mechanism.
Figure 1: SysML sequence diagram of application systems interacting with the Arrowhead core services using tickets: service look-up, ticket retrieval when a ticket has expired or is about to expire, and service consumption with a valid ticket.
3 Arrowhead
Arrowhead is an open source framework for industrial SOA-based applications. It has been designed not to interfere with the basic control operation of industrial systems. The grand challenges of the Arrowhead framework are to enable interoperability and integrability of services produced and consumed by any device. To achieve this, it defines three core services which enable discovery, security and composition. Further support core services are defined which enable advanced system management, Quality of Service (QoS) and service provisioning. In Figure 1 a SysML diagram models the interaction sequences between application systems and the core systems. All Arrowhead SOA-based services follow a very similar line of interaction; only the systems hosting the services change depending on the desired architecture.
Arrowhead defines systems as the participant entities of the SOA. The systems then communicate via service contracts. Thus a service provider instance or consumer instance must be associated with a distinct system. A service contract is defined by four documents: Service Description, Interface Design Description, Communication Profile and Semantic Profile. The details of these documents can be found in the Arrowhead Wiki [12]. In the following, the core functionality of the framework is described.
Service Discovery: Based on DNS-SD, the service discovery core service has three functions: publish, un-publish and query. When a system comes online, it must publish all service providers which it hosts, and un-publish them once a service is no longer available. The query function is used by systems to locate services available for consumption.
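For illustration only, the sketch below shows the publish, query and un-publish functions using the generic jmDNS DNS-SD library; the service type, instance name and TXT record are assumed examples, and this is not the Arrowhead service registry implementation.

import javax.jmdns.JmDNS;
import javax.jmdns.ServiceInfo;
import java.net.InetAddress;

public class ServiceDiscoveryExample {
    public static void main(String[] args) throws Exception {
        JmDNS jmdns = JmDNS.create(InetAddress.getLocalHost());

        // Publish: announce a hosted service provider (type, instance name, port, TXT record).
        ServiceInfo info = ServiceInfo.create("_coap._udp.local.",
                "palletAvailable", 5683, "path=/palletAvailable");
        jmdns.registerService(info);

        // Query: locate services of a given type that are available for consumption.
        for (ServiceInfo found : jmdns.list("_coap._udp.local.")) {
            System.out.println(found.getName() + " on port " + found.getPort());
        }

        // Un-publish: withdraw the provider once the service is no longer available.
        jmdns.unregisterService(info);
        jmdns.close();
    }
}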
4 Case study: Conveyor Loop with Distributed Control
station once all processing associated with the pallet is complete. Additionally, the IEC 61499 application preferred a request-response pattern. This meant that a poll-based web service structure would be followed, which identified two services, defined as the palletAvailable and processComplete services.
processComplete: The process complete service is responsible for operating the actuation mechanism and allows the pallet to move on to the next station. It is not responsible for deciding when processing is complete; rather, it receives this indication from the station. This service is also implemented with CoAP using the coap://[ipv6]:5683/palletAvailable URL. However, this service awaits a POST from the service consumer to indicate that processing is complete.
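A rough sketch of the resulting poll-based interaction, written against the Eclipse Californium CoAP library, could look as follows. The endpoint addresses are taken from the orchestration response shown in the next section, the payload is a placeholder, and the code is not the case study implementation.

import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;
import org.eclipse.californium.core.coap.MediaTypeRegistry;

public class StationClientExample {
    public static void main(String[] args) {
        // Poll the palletAvailable service until a pallet has arrived at the station.
        // (Error handling and the polling loop are omitted in this sketch.)
        CoapClient palletAvailable = new CoapClient("coap://[fdfd::ff]:5683/palletAvailable");
        CoapResponse status = palletAvailable.get();
        System.out.println("palletAvailable: " + status.getResponseText());

        // ... the station carries out its processing here ...

        // POST to processingComplete so the pallet may move on to the next station.
        CoapClient processComplete = new CoapClient("coap://[fdfd::ff]:5683/processingComplete");
        CoapResponse ack = processComplete.post("done", MediaTypeRegistry.TEXT_PLAIN);
        System.out.println("processComplete acknowledged: " + ack.isSuccess());
    }
}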
5 Application of Arrowhead architecture
{
  "bt": 0,
  "e": [
    {"n": "barcode", "v": 9},
    {"n": "available", "v": 0},
    {"n": "loaded", "v": 0},
    {"n": "m cnt", "v": 0}
  ]
}
to the IEC 61499 preferring HTTP. The translator is transparently injected so that the integration of the two systems is transparent with respect to protocols. By using the Arrowhead Orchestration and Translation systems, the lead time for change is reduced and the integration process is simplified. The association between systems is made through evaluation of semantic metadata describing the services provided or consumed by the systems. Figure 5 shows the result of an orchestration rule look-up, followed by service discovery to satisfy the rules, compiled into a response.
{
  "target": "station-01",
  "services": [
    {
      "name": "palletAvailable",
      "address": "[fdfd::ff]:5683/palletAvailable"
    },
    {
      "name": "processingComplete",
      "address": "[fdfd::ff]:5683/processingComplete"
    }
  ]
}
Figure 7: IEC 61499 implementation of the transport system with Arrowhead services.
Barcode is the tray number read from the arrived tray (the parameter is valid only when a tray is available).
TrayLoaded is the data from the work piece sensor at the loading/unloading point; it is true when there is a work piece at the corresponding point on the tray.
7 Conclusion
In this paper, work progress towards the integration of the IEC 61499 standard and Arrowhead IoT services for flexible manufacturing applications is presented. Among the usual benefits, the approach described in this paper allows the future usage of formal verification [17] for service-oriented control applications. The use of Industrial Internet of Things technologies enables multi-year lifetimes when sensing digital inputs, and an event normally has a delay of tens of milliseconds from when a digital signal changes state to when the event is received by the gateway. Future work may include integrated tool development for configuration of distributed automation systems combining Arrowhead and IEC 61499.
Acknowledgment
This work was supported, in part, by the Arrowhead project funded by Artemis PPP
and Vinnova.
References
[1] “IEC 61499-1: Function blocks-part 1 architecture,” International Standard, First
Edition, Geneva, vol. 1, 2005.
[2] F. Jammes and H. Smit, “Service-oriented paradigms in industrial automation,”
Industrial informatics, IEEE transactions on, vol. 1, no. 1, pp. 62–70, 2005.
[3] A. Cannata, M. Gerosa, and M. Taisch, “Socrades: A framework for developing
intelligent systems in manufacturing,” in Industrial Engineering and Engineering
Management, 2008. IEEM 2008. IEEE International Conference on. IEEE, 2008,
pp. 1904–1908.
[4] J. Yan and V. Vyatkin, “Distributed execution and cyber-physical design of baggage
handling automation with iec 61499,” in Industrial Informatics (INDIN), 2011 9th
IEEE International Conference on. IEEE, 2011, pp. 573–578.
[5] ——, “Distributed software architecture enabling peer-to-peer communicating con-
trollers,” Industrial Informatics, IEEE Transactions on, vol. 9, no. 4, pp. 2200–2209,
2013.
[6] W. Dai, V. Vyatkin, J. H. Christensen, and V. N. Dubinin, “Bridging service-
oriented architecture and IEC 61499 for flexibility and interoperability,” Industrial
Informatics, IEEE Transactions on, vol. 11, no. 3, pp. 771–781, 2015.
[7] W. Dai, J. Peltola, V. Vyatkin, and C. Pang, “Service-oriented distributed control
software design for process automation systems,” in Systems, Man and Cybernetics
(SMC), 2014 IEEE International Conference on. IEEE, 2014, pp. 3637–3642.
[8] W. Dai, J. H. Christensen, V. Vyatkin, and V. Dubinin, “Function block imple-
mentation of service oriented architecture: Case study,” in Industrial Informatics
(INDIN), 2014 12th IEEE International Conference on. IEEE, 2014, pp. 112–117.
[9] W. Dai, V. Vyatkin, and J. H. Christensen, “The application of service-oriented
architectures in distributed automation systems,” in Robotics and Automation
(ICRA), 2014 IEEE International Conference on. IEEE, 2014, pp. 252–257.
[10] E. Demin, V. Dubinin, S. Patil, and V. Vyatkin, “Automation services orchestration
with function blocks: Web-service implementation and performance evaluation,” in
Workshop on Service Orientation in Holonic and Multi-Agent Manufacturing (SO-
HOMA’15), 2015.
[11] V. Vyatkin, “IEC 61499 as enabler of distributed and intelligent automation: State-
of-the-art review,” Industrial Informatics, IEEE Transactions on, vol. 7, no. 4, pp.
768–781, 2011.
[12] “Arrowhead framework wiki,” May. 2016. [Online]. Available: https://forge.soa4d.
org/plugins/mediawiki/wiki/arrowhead-f/index.php/MainPage
[13] P. Punal, J. Eliasson, and J. Delsing, An Authentication and Access Control Frame-
work for CoAP-based Internet of Things. IEEE, 2015, pp. 5293–5299.
Authors:
Jan van Deventer, Hasan Derhamy, Khalid Atta and Jerker Delsing
© 2016, The 15th International Symposium on District Heating and Cooling, Reprinted
with permission.
Service Oriented Architecture Enabling The 4th
Generation Of District Heating
Jan van Deventer, Hasan Derhamy, Khalid Atta and Jerker Delsing
Abstract
1 Introduction
District Heating is a common-sense concept that becomes progressively more evident as one reflects on it. It is a centralized production of heat energy that is stored in a fluid and distributed through a network to different buildings within a district. It is a resourceful idea because centralized production is more efficient than several small distributed ones, as long as the distribution losses are small [1]. The concept becomes even more indisputable when the heat energy is a byproduct of an industrial process. For example, a paper mill, which produces paper, can sell off its incidental heat. Another great example is Combined Heat and Power (CHP), which produces electricity from burning refuse and simultaneously heats buildings and the associated domestic hot water. This is a win-win solution for all, including the environment. And yet, it can be further improved.
In its Energy Roadmap 2050, the European Union has set itself the impressive long-term goal of reducing greenhouse gas emissions by 80-95% compared to 1990 levels by 2050. In its commitment to reach that goal, it has considered different scenarios and their impacts. Connolly et al. assert that increased use of district heating in Europe supports the goals of the roadmap at a lower cost [2]. This continued use of district heating is accompanied by its normal evolution, which brings it to its fourth generation.
Lund et al. present an excellent review of the four generations of district heating and of the European Union's energy roadmap leading to the fourth generation [3]. They describe the trends over the generations: an increase in energy efficiency, a decrease in the temperature of the supplied transport medium and an increasing complexity in stakeholders. The first generation's period was from about 1880 to 1930, with transport medium temperatures below 200°C. The second generation ran from 1930 to 1980 with temperatures greater than 100°C, and the third from 1980 to 2020 with temperatures less than 100°C. The fourth generation should span from 2020 to 2050 with temperatures less than 70°C. The increase in complexity includes an increase in the number of different producers, which at some time might be heat consumers, as well as daily variations in heat demand.
To address this complexity increase and aim for harmonious operation, it becomes essential to understand the district heating system. The fourth generation of district heating (4GDH) is made up of several heat production systems, distribution systems and consumer systems, each of them with their own set of sub-systems. The natural temptation is to designate the district heating concept as a system of systems, but Maier argues that this might be a misclassification [4]. According to Maier's tenets, an equivalent term to system of systems would be "collaborative systems", where the sub-systems fulfill valid purposes in their own right and continue to operate to fulfill those purposes if disassembled from the overall system. Additionally, the sub-systems must be managed (at least in part) for their own purposes rather than the purposes of the whole. Aligning with these views serves not only the sub-systems but also the 4GDH, and forms the point of this article, especially since the implementation specifications of the 4GDH do not yet exist. This leads Maier to state that systems of systems are largely defined by their interface standards rather than their structures. He points to the Internet as a good example of collaborative systems. It is that same technology that is used here to define the interface standard of the proposed solution. This enables new emergent behaviors to surface as the 4GDH matures, because the services within the architecture are loosely coupled and late bound.
This article presents a systems architecture for the 4GDH, knowing that the systems specifications will not all be complete for a long time. This plan of action relies on two frameworks and their related interfaces: the Arrowhead Framework and the OPTi Framework. The OPTi Framework offers a simulation platform for different components of DH systems that can include the components of the 4GDH, which enables optimization of the 4GDH's dynamics. The Arrowhead Framework enables system integration and collaboration through Service Oriented Architecture (SOA). We use Model Based System Engineering (MBSE) to convey how this is implemented in a structural and behavioral sense.
The structure of the article presents the two frameworks before turning to models
of district heating enhanced with SOA. The models are based on the MBSE tool
SysML. The models describe district heating's structures from the concept level down to the sensors and actuators within a district heating substation, where we apply the SOA technology based on the open source Arrowhead Framework. The models additionally describe the behavior of the system with a focus on service discovery, authorization and orchestration at the lowest level, to clearly demonstrate the mechanisms involved prior to scaling back up to the whole system of systems, in which the OPTi Framework is an integral part.
2 The Frameworks
If systems are expected to be independent, and yet to collaborate to form a system of systems, they must somehow find a benefit in co-operating. The OPTi Framework addresses the yearning for collaboration, and the Arrowhead Framework provides the service oriented architecture that makes the collaboration possible. Coincidentally, the OPTi Framework can provide services within the Arrowhead Framework. This is possible because they both have well-defined interfaces.
Figure 1: A block diagram depicting OPTi Sim with its different simulation engines.
different component suppliers have chosen different Internet protocols, which could hinder
collaboration due to dialects [11].
To elucidate how these core services are used in district heating, we employ systems
engineering models
core services in a local cloud. Figure 6 shows the outdoor temperature server presenting
its temperature service to the service registry. It does that by POSTing, using the World
Wide Web’s representational state transfer (REST) style, the following message:
{
  "name": "temperature-em219",
  "type": "temp-json-coap.udp",
  "host": "[fdfd::df5:8c6a:5ca2:44a6]",
  "port": 5683,
  "properties": {
    "property": [
      {
        "name": "version",
        "value": "1.0"
      },
      {
        "name": "path",
        "value": "/temperature"
      }
    ]
  }
}
With this posting, the service provider system states its name, its chosen communication semantics, and the path to the service it offers. Additional services would be listed in the property array. By default, or from an installation update, the service provider knows to register its services at the gateway host (host address and port).
Figure 4: A district heating substation with a wireless sensor network, where each ”S” is a tiny
wireless server.
Returning to Figure 6, the service registry then builds a database of available services from all possible providers. Arrowhead's service registry is based on DNS-SD, which is an extension to DNS. DNS, the domain name system, part of the Internet Protocol Suite's application layer, provides the address at which a service is hosted. The service discovery extension, DNS-SD, provides the ability to discover services which an application might want to use.
Figure 5: The gateway composed of a BeagleBone, a Mulle, along with the core services and an
application.
A service consumer, e.g., the DH application, can then ask the service registry for the desired service and receive back the address of the service provider with the path to the service of interest. The service consumer directly contacts the service provider whenever it wants the needed information, also referred to as "pulling". The answer from the service provider is:
{
  "e": [{
    "n": "urn:dev:mac:0024befffe804ff1",
    "t": 1425256855,
    "u": "Cel",
    "v": 23.5
  }]
}
with its name, the timestamp of the measurement, along with the unit and the measured value. Alternatively, after being requested by a consumer, the service provider could "push" an update at a specific time interval or upon some agreed event. The Arrowhead Framework does not restrict pull and push behaviors.
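As a minimal illustration of the two interaction styles, the sketch below reuses the host and path from the registration example above with a generic CoAP client (Eclipse Californium); CoAP observe is assumed here as one possible push mechanism, and the code is not part of the described system.

import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapHandler;
import org.eclipse.californium.core.CoapResponse;

public class TemperatureConsumerExample {
    public static void main(String[] args) {
        CoapClient client =
                new CoapClient("coap://[fdfd::df5:8c6a:5ca2:44a6]:5683/temperature");

        // Pull: the consumer requests the value whenever it needs it.
        CoapResponse reading = client.get();
        System.out.println("Pulled: " + reading.getResponseText());

        // Push: one way to realize it is CoAP observe, where the provider
        // notifies the consumer upon change after a single initial request.
        client.observe(new CoapHandler() {
            @Override
            public void onLoad(CoapResponse response) {
                System.out.println("Pushed: " + response.getResponseText());
            }

            @Override
            public void onError() {
                System.err.println("Observe relation failed");
            }
        });
    }
}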
With service discovery recounted, several natural questions surface, including the following. What is the latency in the control loop? How is security assured, both for privacy protection and against tampering? How are service selections sorted out in the case of multiple providers, and how is provider drop-out handled? How is the connection to the Internet handled?
The Arrowhead project has addressed such questions by first defining local clouds. The service consumers do not interact with the service providers through a cloud far away, but rather through a local cloud, e.g., the gateway. This assures the low latency necessary in control applications and offers some security from the outside world. Additional security
is handled by the authorization service, which could use certificates or tickets. Figure 7 shows the sequence diagram using the Arrowhead Framework core services. If the service provider is power constrained, it would prefer to receive an authorization ticket with a limited lifetime to communicate with a given consumer, to avoid wasting power by obtaining an authorization at each request. The logic is that the consumer obtains a ticket from the authorization service, which it passes on to the service provider. The service provider then checks with the authorization service whether the ticket is valid and, if so, will communicate with the consumer for the lifetime of the ticket.
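The ticket logic can be summarized with the following sketch; the types and method names are hypothetical and only mirror the sequence of Figure 7, not the actual Arrowhead implementation.

import java.time.Instant;

/** A ticket bound to one consumer, one service and a limited lifetime (illustrative only). */
record Ticket(String consumerId, String serviceId, Instant expiresAt) {
    boolean isExpired() { return Instant.now().isAfter(expiresAt); }
}

/** Hypothetical view of the authorization core service. */
interface AuthorizationService {
    Ticket issueTicket(String consumerId, String serviceId); // called by the consumer
    boolean validate(Ticket ticket);                         // called by the provider
}

/** A power-constrained provider that re-checks authorization only once per ticket lifetime. */
final class PowerConstrainedProvider {
    private final AuthorizationService authorization;
    private Ticket acceptedTicket;

    PowerConstrainedProvider(AuthorizationService authorization) {
        this.authorization = authorization;
    }

    String serve(Ticket ticket, String request) {
        // Validate an unseen ticket once; afterwards the cached ticket is trusted until it expires.
        if (acceptedTicket == null || acceptedTicket.isExpired() || !acceptedTicket.equals(ticket)) {
            if (ticket.isExpired() || !authorization.validate(ticket)) {
                return "unauthorized";
            }
            acceptedTicket = ticket;
        }
        return "measurement for " + request;
    }
}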
The third core service proposed by the Arrowhead project is the Orchestration service. Its purpose is to provide some "intelligence" in the selection of a service provider, or to recommend an alternative provider when a specific one has dropped out. If the application itself is enhanced with this ability, it does not need the orchestration, but this would mean that the application is quite specific and not general, leaving the choice to a system architect.
Figure 7: Sequence diagram with all three Arrowhead Framework core services.
For example, if the outdoor sensor is offline, the orchestration service could infer the
outside temperature from the heat meter’s primary supply temperature or go beyond the
local cloud.
The Arrowhead Framework has a collection of services, such as the Gatekeeper to interact safely with the Internet or the Historian to log data. With service providers like the Gatekeeper, the Arrowhead Framework permits collaboration between local clouds to build a system of systems, where the OPTi Framework entices collaboration as optimized performance benefits all stakeholders. The gateway can then join other clouds, e.g., a district heating cloud, to provide and consume services in the same manner as is done in the local cloud. This notion ensures scalability of the systems, a concept the Internet has already proven. It also addresses possible failure points; e.g., if some parts of the communication network become unreachable, the OPTi Framework could, as a service, provide estimated values of the current state within the DH network. SOA, being based on the same architecture as the World Wide Web, empowers the transition into and evolution of the 4GDH with security and ease of maintenance.
4 Conclusion
Using MBSE and SysML, we have modeled a district heating system's structure and zoomed down into a substation. The substation incorporates very small wireless sensor and actuator nodes that are web servers offering services. Using SysML's sequence diagrams, we have illustrated the message exchange between service providers and consumers in partnership with the Arrowhead Framework core services. The advantages of the Arrowhead Framework include properties such as loosely coupled modules (i.e. two SOA systems do not need to know about each other at design time to allow a runtime data exchange), late binding (i.e. the exchange of data between two systems is established at runtime), autonomy, pull and push behavior (i.e. data can be requested, or sent without request upon predetermined conditions), several standardized SOA protocols, data structures, information semantics, and data encryption. As we zoom out from the district heating substation to the distribution network and production plants, we use the OPTi Framework to address scalable data management and macro and micro simulations, which can be applied to manage and optimize the 4GDH, while we point to new collaborative and emergent behaviors of a system of systems. We contend that the use of these two frameworks supports the transition into, as well as the growth of, the 4GDH.
5 Acknowledgment
The OPTi project has received funding from the Horizon 2020 Research Programme of the European Commission under grant number 649796. The Arrowhead project is part of the Artemis Innovation Pilot Project. ARTEMIS stands for "Advanced Research and Technology for Embedded Intelligence and Systems" and is the European Technology Platform for Embedded Computing Systems.
6 References
[1] S. Frederiksen and S. Werner, District Heating and Cooling. Studentlitteratur AB,
2013. [Online]. Available: https://books.google.se/books?id=vH5zngEACAAJ
[2] D. Connolly et al., "Heat Roadmap Europe: Combining district heating with heat savings to decarbonise the EU energy system," Energy Policy, vol. 65, pp. 475–489, 2014. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0301421513010574
[5] A. Gylling. Optimization of district heating & cooling systems. [Online]. Available:
http://www.opti2020.eu
[8] P. Punal, J. Eliasson, and J. Delsing, An Authentication and Access Control Frame-
work for CoAP-based Internet of Things. IEEE, 2015, pp. 5293–5299.
[9] P. Varga and C. Hegedus, “Service interaction through gateways for inter-cloud col-
laboration within the arrowhead framework,” 5th IEEE WirelessVitae, Hyderabad,
India, 2015.
Authors:
Hasan Derhamy, Jesper Rönnholm, Jerker Delsing, Jens Eliasson, and Jan van Deventer
Protocol Interoperability of OPC UA in Service
Oriented Architectures
Hasan Derhamy, Jesper Rönnholm, Jerker Delsing, Jens Eliasson, and Jan van Deventer
Abstract
1 Introduction
Internet of Things (IoT) communication protocols cover highly varied application do-
mains. Trying to select a single communications protocol reduces flexibility and removes
possibility for leveraging domain specific benefits of a certain protocol. Therefore, to be
able to operate in a multi-protocol ecosystem a form of translation is required. Rather
than a dedicated middleware or network proxy, Derhamy et al. propose an ”on-demand”
translation service. Following Service Oriented Architecture (SOA), this translation ser-
vice is injected between a service exchange when a mismatch is detected by an Orches-
trating entity.
Object Linking and Embedding for Process Control - Unified Architecture (OPC UA) is an industrial communication framework that is highly promoted for integrating distributed systems. While it has many promising features, which we describe in Section 2, there are still situations where other communication protocols have benefits. Primarily in constrained environments, protocols such as the Constrained Application Protocol (CoAP) are well suited for battery-powered and processor-constrained applications. But also, as communication protocols change and improve, solutions must adapt to the new technology. Hence, a successful interoperability solution must work with existing and future protocols.
There is existing work towards migrating applications from existing protocols and
frameworks to OPC UA and on integrating OPC UA with other protocols. Some of
these works are presented in the next section.
MQTT-client APIs.
HyperUA [10] is another proprietary solution for accessing OPC UA servers from web clients. An HTTP server provides a gateway for web clients to address HyperUA nodes, references, monitored items, servers and subscriptions. The gateway handles sessions to the OPC UA servers and translates Hyper nodes to OPC UA nodes. In order to interact with an OPC UA server, clients must implement the HyperUA API.
The primary problem with interoperability solutions such as gateways and protocol adapters is that they require custom configuration per site and do not scale well with each additional protocol added to the mix. As the complexity of a centralized gateway grows, the solution becomes more brittle and resists change.
1.2 Contribution
Until now, OPC UA applications have not been usable alongside standard web applications without knowledge of OPC UA. As stated in the related work, previous efforts with HTTP and CoAP have used them as a transport for regular OPC UA communications.
This paper proposes an OPC UA translator that works with standard IoT protocols such as HTTP, CoAP and MQTT, enabling access to OPC UA nodes from non-OPC UA based IoT applications. The paper presents:
The paper, in Section 2, introduces the OPC UA protocol and the high-level aspects relevant for translation. Following this, Sections 3 and 4 present the proposed solution and the use case. Finally, a discussion and conclusions are given in Sections 5 and 6.
relationships, an OPC UA client can access node and reference data to gather its own understanding of any OPC UA information model.
Interactions between an OPC UA client and server are standardized through a set of services provided by the server. The services allow access to and management of nodes: 1) management of nodes and the information model; 2) reading and writing data to nodes, both query and subscription based; and 3) establishment of communication channels to perform further requests. A server can choose to support only a subset of the full service set; this is determined using a server profile. OPC UA can be used as a data-oriented historian. Clients store data within a defined information model. Upper-layer functions can then access the stored data in an asynchronous manner.
In the next section the interoperability solution is proposed and the fundamental
architecture described.
When a Create request is passed to the OPC UA spoke, the node id is stored in the
URI and the information for creation is stored in the payload. Only the AddNodes service
is mapped to the Create operation. This is where the OPC UA specific information leaks
into the non-OPC UA protocol.
When a Read request arrives at the OPC UA spoke, it could be either a read ser-
vice request or a browse service request. These services are differentiated based on the
presence of an attribute index in the URI path. If no attribute index is present, the
translator spoke performs a Browse on the node.
An Update corresponds to three services that modify an existing node: Write, AddReferences and DeleteReferences. References are not addressable and belong to a source node. Hence, the translator takes the node id in the URI as the source node and requires further information in the payload. Both deleting and adding references are treated in the same manner, except that the payload information differentiates the operation.
In order to perform a write operation on a node, an attribute index must be supplied in the URI. The translator addresses the attribute of the node directly. The payload of a write operation is treated as a serialized string; hence it can be JSON, XML, CBOR, EXI, etc. This is the most common approach for IoT protocols such as CoAP, HTTP, MQTT and XMPP.
A Delete operation is translated as a DeleteNodes service request, removing the node corresponding to the node id stored in the URI. The request will also delete all references of the target node.
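The mapping described above can be condensed into a small dispatch routine, sketched below; the names are illustrative only and the actual translator spoke may structure this differently.

/** Condensed view of the CRUD-to-OPC UA service mapping described above (illustrative only). */
enum Crud { CREATE, READ, UPDATE, DELETE }

final class OpcUaServiceMapper {
    /**
     * operation          - CRUD operation carried by the non-OPC UA protocol
     * hasAttributeIndex  - true if the URI contains an attribute index
     * payloadIsReference - true if the payload describes a reference to add or delete
     */
    static String select(Crud operation, boolean hasAttributeIndex, boolean payloadIsReference) {
        switch (operation) {
            case CREATE:
                return "AddNodes";                              // only service mapped to Create
            case READ:
                return hasAttributeIndex ? "Read" : "Browse";
            case UPDATE:
                if (payloadIsReference) {
                    return "AddReferences or DeleteReferences"; // payload differentiates the two
                }
                return "Write";                                 // requires an attribute index in the URI
            case DELETE:
                return "DeleteNodes";                           // also removes references of the target node
            default:
                throw new IllegalArgumentException("Unknown operation: " + operation);
        }
    }
}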
<IP>:<port>/<path>/<ns-idx>/<name>/<attr-idx>/<arr-idx>
Figure 2: The general URL structure to address an OPC UA node. Attribute-index can be
added for attribute-granularity, and array-index can be added for accessing single elements in
arrays stored under attributes.
To reduce the non-OPC UA path parametrization when translating to OPC UA, the URI format uses a string for the node name. The standard node-set, in the OPC Foundation name space, uses numeric node names, so for this name space the node names are interpreted as numeric. Hence, nodes with name space index 0 will have their names interpreted as a numeric type.
A reference is identified by a triple: the source node, the target node and the reference type. The proposed translator addresses references according to the source node described in the URI. The target node and reference type are provided in the payload.
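For illustration, a parser for the addressing rules of Figure 2 could look roughly as follows; it encodes only the rules stated above (e.g. namespace index 0 implies a numeric node name) and is not part of the actual translator.

/** Parses the node-addressing part of the URL in Figure 2 (illustrative only, not the translator code). */
final class NodeAddress {
    final int namespaceIndex;
    final Object nodeName;        // numeric for the OPC Foundation name space (index 0), string otherwise
    final Integer attributeIndex; // optional: attribute granularity
    final Integer arrayIndex;     // optional: single element of an array attribute

    private NodeAddress(int ns, Object name, Integer attr, Integer arr) {
        this.namespaceIndex = ns;
        this.nodeName = name;
        this.attributeIndex = attr;
        this.arrayIndex = arr;
    }

    /** Expects "<ns-idx>/<name>[/<attr-idx>[/<arr-idx>]]", i.e. the part after <path>. */
    static NodeAddress parse(String path) {
        String[] parts = path.split("/");
        int ns = Integer.parseInt(parts[0]);
        Object name = (ns == 0) ? Integer.valueOf(parts[1]) : parts[1];
        Integer attr = parts.length > 2 ? Integer.valueOf(parts[2]) : null;
        Integer arr = parts.length > 3 ? Integer.valueOf(parts[3]) : null;
        return new NodeAddress(ns, name, attr, arr);
    }
}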
is just as applicable. JSON has certain advantages over XML, in this case reduced overhead in the encoding and better readability.
When writing to a single node (the update operation), the request payload format is shown in Figure 3. The node can be accessed down to the variable attribute, which is addressed in the URI. The value is the only parameter in the payload and can be passed straight through to the OPC UA domain without modification.
When performing a read, the same URI format is used. During a read operation the payload is not modified; it is serialized as a JSON object or XML document and given to the intermediate format. In the case of a browse operation, the attribute index is omitted from the URI. The payload is not modified because, ideally, the protocol translator should not get involved in semantic translation. A semantic translator would be responsible for that and could work in tandem with the protocol translator. However, full semantic translation is outside the scope of this work.
On the other hand, node and reference creation requires additional information in the payload. The structure of such a payload is shown in Figure 4.
In order to enable node and reference instantiation, a formulation for node and reference representation was provided for parametrization. Figure 5 shows the add-reference payload format; the source node is defined in the URI structure and the reference belongs to this node. The "forward" parameter specifies in which direction a non-symmetrical reference points its semantics. This is one of the areas where the OPC UA API must have representation within the payload structure.
Figure 6 shows the format for deleting a reference. It follows the same format as the add reference; the difference is in the object name, "deleteReference", and the lack of the nodeclass parameter, which is only needed when adding references. The deleteReference
4 Application Scenario
The proposed solution can be used as a middleware solution with a permanent presence in the communication path. Another approach is to provide the translator as a service which can be invoked dynamically based on need. In the application scenario described here, the translator is a service. The SOA-based Arrowhead Framework [13] enables dynamic provisioning and composition at run time. Leveraging these features, the proposed translator can be invoked dynamically based on the current orchestration of service consumption. In this way the Arrowhead Framework supports multi-protocol Systems of Systems. This may become a key enabler of the Industrial Internet of Things.
At the core of the Arrowhead framework are three mandatory core services; these are:
1. ServiceRegistry - provides a store for publishing service provider presence and performing look-up.
2. Authorization - provides access control between service consumers and providers.
3. Orchestration - composes service exchanges by supplying consumers with the endpoints of matching providers.
Arrowhead also defines support core services. These services are used only when needed.
The Multi-protocol translator is one of these support core services. The proposed solution
extends the existing translator with the OPC UA protocol.
In the context of the proposed solution, the Orchestration is responsible for, firstly, identifying a protocol mismatch between a potential service provider and consumer. Secondly, it issues provisioning requests to the translator, upon which the translator instantiates two endpoint spokes which match the protocols of the service provider and consumer respectively. The details are described in Section 3.1.
The Arrowhead framework documentation architecture defined by Blomstedt et al. in [14] describes three levels: services, systems and systems-of-systems. Of interest to translation, the service layer documentation captures 1) the abstract service description, 2) the interface design, and 3) the protocol and semantic profiles. These artifacts are used to identify the protocol mismatch between matching services.
A pilot scenario from the Arrowhead project has been selected to demonstrate the proposed solution. The scenario involves monitoring the vibrations and heat produced by ball bearings on a Volvo wheel loader. Volvo wheel loaders are expensive machines which must be maintained in order to extend their lifetime. By monitoring the ball bearing it is possible to measure wear and schedule preventive maintenance. In addition, by monitoring the rotations of the wheel, it is possible to ascertain the performance of the ball bearing and detect early end of life.
In order to extract information at such a granular level from industrial machinery, IoT technologies offer a promising choice. IoT-based ball bearings are more than simply connecting the wheel loader to the Internet. Industrial environments require a solution which is robust to communication interruption, has a long service life and is able to scale up to many thousands of devices within a single solution. This means that the solution needs low-power operational modes and intelligent, automated re-configuration.
The Constrained Application Protocol (CoAP) is a preferred protocol for low-power environments, offering support for RESTful interface design [15]. In this use case a CoAP-based ball bearing sensor runs a client as a service consumer. It seeks out a service provider to which it is able to push sensory data. Arrowhead orchestration is used to find a service provider which matches the needs of the ball bearing sensor. The Orchestration has been loaded with a description of the requirements for the sensor. The requirement is as follows:
1. If available, use the head-office historian,
2. else, use a local historian to temporarily store data.
The Systems of Systems (SoS) description includes a head office historian and a local historian. If an Internet connection is available, then the head office historian is used; otherwise the local historian is used. When the Orchestration service is used, it will return either the local or the remote historian. This decision is based on availability; the remote service provider is dependent on an Internet connection. However, the head office historian must integrate with many more systems than simply the ball bearing sensor. It has been implemented with OPC UA, perhaps according to company policy. Figure 7 shows the SoS with an Internet connection and Figure 8 without an Internet connection.
Figure 7: System of System diagram when Internet connectivity with VPN is possible
In this case we have an interoperability issue between the ball bearing and the remote historian. It would be possible to implement a local gateway which caches the data and forwards it to the remote historian. However, this gateway would require implementation, testing and configuration effort. Furthermore, the local historian in this use case supports CoAP, so for the local connection there are no interoperability issues. The proposed solution is used here as an "on-demand" protocol bridge when communicating with the head office, while saving resources when direct communication is possible between the sensor and the local historian.
Figure 11: CoAP payload example and structure, three nodes being updated in a single CoAP
message
5 Discussion
The translator has attempted to reduce the leakage of OPC UA semantics across to native web protocols. As with a translation between MQTT and HTTP, where a path in HTTP represents a topic in MQTT, a path in HTTP represents a node in OPC UA. However, the path must conform to a particular structure. By utilizing methods common to HTTP and CoAP with OPC UA, the OPC UA protocol spoke does not require additional semantic information within the non-OPC UA interface definition.
In this translation, the core features of the OPC UA information model have been pre-
served. Namely, it is possible to address individual nodes and to browse nodes. Therefore
it is possible to explore the address space and to read and write to nodes.
However, OPC UA supports 37 different services, while the proposed solution has only mapped 7 of them. The supported service set provides a bare minimum of interoperability for non-OPC UA systems to interact with OPC UA servers. In this way, the payload interface requirements placed on the non-OPC UA systems are kept to a minimum. In fact, for read and write, only the addressing scheme of the service interface is impacted. This impact is also kept to a RESTful style, and would therefore not be unnatural to any REST API developer. It could be possible to include the full set of OPC UA services within the URL, as path variables or URL query parameters. This would, however, break the RESTful design approach by including operation information in the address space. It would also place further interface design considerations on the non-OPC UA systems.
A service, in the SOA concept, provides a functionality. This goes beyond the interaction services defined in OPC UA. A call made to a service is expected to produce some value-added result. Perhaps the concept of the method object in OPC UA could map well to a functional service. An OPC UA server would then provide services such as "sound fire alarm", "turn off lights" or "store wheel loader data". It is these services which should be registered; a generic service discovery methodology such as DNS-SD would allow all systems (OPC UA or not) to discover services provided by an OPC UA server.
6 Conclusion
Communication in Industry 4.0 requires flexible interconnections between integration
layers and information layers. The OPC UA communication framework shows potential
for such interconnections. Convergence to a single protocol is possible, but it is likely
that a multi-protocol communication network will exist for some time.
The translation service proposed here enables interoperability between OPC UA and other SOA protocols, supporting the integration of legacy automation systems with upcoming IoT devices. Such integration is supported by integration platforms like the Arrowhead Framework. Frameworks such as Arrowhead enable dynamic provisioning and composition of the solution into the service exchange communication path. The proposed solution allows the OPC UA translation mapping to be defined once, and then used for the CoAP, HTTP and MQTT protocols.
The translation preserves management, browsing and reading/writing of OPC UA nodes. Because of the complexity of the OPC UA service interface, the translation does not cover all functions of OPC UA. The mapping from the OPC UA address space requires that nodes are addressed in the URL path or topic name in a specific manner. The name space index and node name in the path are required to address the node. Combining this with standard CRUD methods, a subset of OPC UA services can be invoked from a generic non-OPC UA service interface.
7 Future work
Future work involves translation back from OPC UA to HTTP/CoAP/MQTT. This would enable OPC UA clients to access non-OPC UA services. Incorporating OPC UA server discovery into a generic service discovery is also required for full look-up and runtime binding routines to be performed.
Another direction is investigating the possibility of expanding the supported OPC UA service set while maintaining a minimum requirement on service interfaces implemented with non-OPC UA protocols.
An interesting new protocol for IoT is gRPC. Developed by Google and used within
their microservices infrastructure, it would be interesting to investigate its suitability for
interoperable use with OPC UA. It would enable cloud based service interaction with
automation services on the factory floor.
Acknowledgment
This work is supported by the EU ARTEMIS JU funding, within project ARTEMIS/0001/2012,
JU grant nr. 332987 (Arrowhead).
References
[1] G. Cândido, F. Jammes, J. B. de Oliveira, and A. W. Colombo, “Soa at device level
in the industrial domain: Assessment of opc ua and dpws specifications,” in 2010 8th
IEEE International Conference on Industrial Informatics, July 2010, pp. 598–603.
[2] T. Sauter and M. Lobashov, “How to access factory floor information using internet
technologies and gateways,” IEEE Transactions on Industrial Informatics, vol. 7,
no. 4, pp. 699–712, Nov 2011.
[3] M. J. A. G. Izaguirre, A. Lobov, and J. L. M. Lastra, “Opc-ua and dpws interop-
erability for factory floor monitoring using complex event processing,” in 2011 9th
IEEE International Conference on Industrial Informatics, July 2011, pp. 205–211.
[4] S. Grüner, J. Pfrommer, and F. Palm, “A restful extension of opc ua,” in 2015 IEEE
World Conference on Factory Communication Systems (WFCS), May 2015, pp. 1–4.
[5] ——, “Restful industrial communication with opc ua,” IEEE Transactions on In-
dustrial Informatics, vol. 12, no. 5, pp. 1832–1841, Oct 2016.
[6] P. Wang, C. Pu, and H. Wang. (2017) Opc ua message transmission method over
coap 01. [Online]. Available: https://tools.ietf.org/html/draft-wang-core-opcua-
transmission-01
[7] ——. (2016) Requirement analysis for opc ua over coap. [Online]. Available:
https://tools.ietf.org/html/draft-wang-core-opcua-transmition-requirements-00
[8] Kepware. (2017, May) Kepserverex. [Online]. Available: https://www.kepware.
com/en-us/products/kepserverex/documents/kepserverex-manual.pdf
[9] ——. (2017, May) Kepware iot gateway. [Online]. Avail-
able: https://www.kepware.com/en-us/products/kepserverex/advanced-plug-ins/
iot-gateway/documents/iot-gateway-manual.pdf
Authors:
Hasan Derhamy, Jens Eliasson, Jerker Delsing, and Jan van Deventer
In-network Processing for Context-Aware
SOA-based Manufacturing Systems
Hasan Derhamy, Jens Eliasson, Jerker Delsing, and Jan van Deventer
Abstract
To achieve flexible manufacturing, increasingly large amounts of data are being gener-
ated, stored, analyzed, archived and eventually fed back into the product life cycle. But
where is this data stored and how is it transported? Current methods rely on centralized
or federated databases to manage the data storage. This approach has several challenges,
such as collection bottlenecks, secure retrieval, single point of failure and data-scheme
fragility as data heterogeneity increases. Additionally, manufacturers are finding the
need to open their networks for service based equipment suppliers. This means previous
security assumptions regarding network encryption and information access-control must
be re-evaluated.
Proposed here is a method of in-network processing that gathers information only
where and when it is needed. Systems build context at runtime by creating dynamic
queries which drive service composition. The service composition processes raw data and
presents it as information to the calling system. This reduces the movement of data and
information and removes single-point collection bottlenecks. Furthermore, fine-grained
access control and shared trust can be granted between untrusted systems. The pro-
posed methods are demonstrated on a lab setup of an industrial use case.
1 Introduction
Industry is demanding more Flexible, Efficient and Sustainable (FES) methods for man-
ufacturing [1]. Factory digitization was seen as the next step forward. ISA 95 assisted
factories in moving data between the factory floor and enterprise systems. However, due
to pure vertical integration [2], information flow and communication in the horizontal
and diagonal directions were not considered. In this pursuit, the Industry 4.0 initiative
has generated the Reference Architectural Model for Industry 4.0 (RAMI 4.0) and the
I4.0 component model [3]. They envisage communication between smart equipment and
products that are wrapped in administrative shells. The FES gains require a change to
many security assumptions and security objectives. This is to enable communication,
co-operation and information flow between multi-stakeholder Systems of Systems. Infor-
mation is a key enabler to finding efficiency gains and improving sustainable processes.
Raw data is gathered from IT systems and factory automation systems and stored
in databases. Raw data has limited value and must be processed before its value can
be realized. However, processing the data into useful information can be costly and
time consuming. Furthermore, even once improvements are identified, the software and
automation systems are often brittle and resist this change.
Flexible manufacturing means that a production line is able to adapt rapidly to
changes in the production process. This means that a customized order of lot size 1 is
a unique product/variant. So a single production line must handle multiple products or
variants at once. Managing the order/specification information and then disseminating
it to the right automation system at the right time requires significant synchronization.
1.1 Gap
Current solutions to data extraction, analysis and dissemination either suffer from cen-
tralized bottlenecks and single points of failure or fixed/rigid middleware. An information
exchange solution must conform to the requirements summarized in Table 1.

Table 1: The requirements for information exchange in next generation industrial manufacturing.

Requirement                                       Description
Dynamic routing of data between systems           Systems are moving, changing and adapting, so static routing of data is not flexible.
On-demand access to information                   Mass collection/processing of data is not efficient and secure. On-demand means information is made available only when and where it is needed.
Controlled distribution of data and information   Granular access rights and ownership of data/information must be supported.
Existing solutions using centralized data collection and processing or enforced mid-
dleware lack flexibility and are brittle to change.
The proposed solution is dynamic in nature, supports privacy-by-design and fine-
grained access control through supporting systems. The proposed solution presented
in Sections 3 and 4 enables context-aware industrial systems by allowing information
query from SOA-based networks without reliance on centralized data collection. This is
implemented in a lab setup of an industrial use case, condition monitoring of a wheel
loader ball bearing.
2. Background 193
1.2 Contribution
Here we propose an information extraction architecture that provides in-network pro-
cessing for context-aware systems. It builds on SOA-based principles. To achieve this
there are three main challenges:
1. Expression of interest for information
2. Orchestration method to parse the interest and identify the correct composition
3. Dynamic provisioning of the data processing services
This paper tackles the first and third challenges. The second challenge is to be tackled
in future work.
The first challenge requires a method of expressing interest for information. An
interest must be able to express the required type of information (indicating the source
of the data) and the functional transformations of data. We propose to use Cypher [4],
an SQL-inspired graph query language. Section 2.3 details the expression of interest.
Dynamic services are proposed to assist with standard data transformations. This
includes threshold triggering, data aggregation, filtering, etc. Dynamic services are
service instances which are provisioned based on composition instructions. Section 4
describes the details.
Orchestration for SOA-based systems builds on concepts introduced by the Arrowhead
Framework [5]. This paper assumes that a composition facility is available.
The following sections will introduce the background to the importance of information
exchange and its application and the related work.
2 Background
2.1 Context-awareness
Context-aware computing, starting in the 1990s, focused primarily on providing location-
dependent information to mobile users. According to Abowd and Dey [6], context can be
defined as:
“Context is any information that can be used to characterize the situation of an
entity. An entity is a person, place, or object that is considered relevant to the interaction
between a user and an application, including the user and applications themselves”.
At its essence context-aware computing involves systems modifying their behavior
based on changes to context over time. It is primarily used to decide what information
to present to a user. For example, a shopping list system may remind a user about the
shopping list as they travel toward or past a supermarket. In this case, the system
monitored the user's context.
However, context-aware computing, when applied to the Industrial Internet, can enable
advanced machine-to-machine behavior changes. Location is one aspect for a mobile
system on a factory floor. However, a full list of relevant context might incorporate:
1. Production recipe,
2. Available machinery,
3. Available material,
4. Safety status,
5. Specifications,
6. Environmental conditions,
7. Location.
A context-aware industrial system interacts with other systems without user inter-
action. In this machine-to-machine environment, context moves in both vertical and
horizontal manners. Overall production objectives and methods are based on planning
and production recipes, while devices and product operation generate context moving
upward.
Context-aware systems decide the order of process execution based on available ma-
chinery. They do not begin dangerous processes while safe operation is not possible. A
customer specification decides the quality and customizes a product recipe. Environ-
mental factors affect quality; a context-aware system does not begin a process before the
environmental requirements are satisfied.
Many automatic control systems such as ABS braking react to input and could there-
fore be considered context-aware systems. These embedded systems fit the pervasive
ubiquitous notion. They are single purpose tailored systems. However, these systems
collect data, process it, and make decisions within a single system. Industrial context-
aware systems cannot collect all raw data and process it themselves. This would result
in much duplicate collection and processing of data. Rather they gather context based
on responses from other systems in the service network.
A context-aware system uses its gathered context to decide: 1) when it can begin a
process; 2) when it can execute the next step; 3) when to raise alerts regarding quality assurance;
and 4) how to make use of surrounding services. In this way, a flexible manufacturing line will have
many context-aware systems that make varied decisions as the context changes. Complex
operations emerge from simple actions. Once again, the issue of information management
raises questions of scalability, security and changeability.
In-network processing as described by Tanenbaum [7] is an approach of treating the
computer network as a means of processing raw data and aggregating or presenting the
information to the requester. TinyDB [8] is an example of in-network processing applied
to wireless sensor networks, but the approach can also be applied to general computing. Any
network can act as a database that can be queried for information that may not exist
until the query is processed. One of the benefits of in-network processing is avoiding the
network and memory overheads of centralizing data and information. Furthermore, by
aggregating and preparing results, it can help reduce the complexity of systems consuming
data from multiple sources.
Another similar approach to in-network processing is Information centric networking
(ICN). It is a proposal to move from host based addressing to information name address-
ing [9]. Regardless of where the information may exist, a system sends out an “interest”
for a particular information. This “interest” is then spread throughout the information
centric network until the information is found and returned to the requester. An ex-
tension to ICN is Service Centric Networking (SCN) [10]. SCN extends ICN to include
in-network functions that process existing data/information into new information. This
is a recursive operation where information = f (information).
These three approaches provide interesting methods of collecting, processing and dis-
tributing information. However, they all tackle information management at the network
layer, requiring significant changes to the network infrastructure. This is not feasible
with current trends in technology.
However, if it is possible to achieve in-network processing at the application layer,
then the network infrastructure would not inhibit adoption of these concepts. The proposed
solution builds on the Cypher graph query language; a simple query is shown in Figure 1.
MATCH (nodea:nodeTA)-[edgeb:edgeTB]->(nodec:nodeTC)
RETURN nodea
Figure 1: A simple graph query matching two nodes related by an edge. Returned is a reference
to the nodea entity. The nodea entity must match the relationship edgeb to nodec to satisfy the
query.
Cypher also supports a “WHERE” clause, as well as property conditions within the
“MATCH” clause. Additionally, multiple “MATCH” clauses can be used within a single
query in order to build up the query.
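For illustration only, with hypothetical labels and properties, a query combining property conditions, a WHERE clause and multiple MATCH clauses could be written as:

MATCH (s:System)-[p:ProvidesService]->(:ServiceType {name: "Temperature"})
MATCH (s)-[:LocatedAt]->(loc:Location)
WHERE loc.name = "chamber1" AND p.version >= 2
RETURN s, p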
In [8] an SQL-styled declarative expression is used to describe the desired data. The
SQL expression has been enhanced to allow data processing on returned data. However,
from the expression the mapping to services is unclear. The expressions are usually
spread to all nodes and a middleware is used to process the query.
Here, Cypher allows declarative queries while maintaining an SOA-based, composition-
oriented expression of interest for information. The Cypher query language is
introduced in order to fully define its usage in the proposed solution.
3.1 Expression
An information query must define the source of the data, the processes, and the return
format. In AF a service represents a function, including the parameter format, the pro-
cessing and the return format. The system providing the service will either get raw data
internally or process data from another system. Therefore, using Cypher to describe the
system and service graph, all three aspects are defined. A simple example of a graph
expression can be seen in Figure 2. Here the source of the information is a temperature
sensor system (the far right system) and the processing is a threshold check. An inter-
mediate system provides the thresholding service upon data retrieved from the temperature
service.
MATCH ()-[ts:ThresholdService]->()
-[:TemperatureService]->({l:chamber1})
WHERE ts.value > 60
RETURN ts.address
Figure 2: A simple Cypher query to express interest in the event of a chamber exceeding a
threshold temperature.
In this case, the application system need not perform thresholding internally and
thus has reduced complexity. This may seem a trivial simplification; however, it means
that its internal logic is not dependent on the data processing. It is possible to change
the processing of data to a band-pass threshold without modification to the application
system. A change like that could be due to a quality enhancement identified by
keeping the temperature within a band.
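As a minimal sketch of how an application system might submit such an expression of interest, assuming a hypothetical HTTP endpoint on the composition system (the concrete Arrowhead interfaces and addresses are not specified here), the query can be posted as plain text:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InterestClient {
    public static void main(String[] args) throws Exception {
        // Expression of interest (cf. Figure 2), sent as a plain Cypher string.
        String interest =
                "MATCH ()-[ts:ThresholdService]->()\n" +
                "      -[:TemperatureService]->({l:chamber1})\n" +
                "WHERE ts.value > 60\n" +
                "RETURN ts.address";

        // Hypothetical composer endpoint; in a real deployment this address would be
        // discovered through the service registry rather than hard coded.
        HttpRequest request = HttpRequest.newBuilder(URI.create("http://composer.local:8080/interest"))
                .header("Content-Type", "text/plain")
                .POST(HttpRequest.BodyPublishers.ofString(interest))
                .build();

        HttpResponse<String> response =
                HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString());

        // The response is expected to carry the address (or topic) where the
        // requested information will later be pushed or can be retrieved.
        System.out.println("Notification endpoint: " + response.body());
    }
}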
The expression of interest is then parsed into a suitable composition of System of Systems (SoS).
Parts of this composition are data processing systems such as thresholding, aggregation and
filtering services.
4 Proposed solution - Dynamic Service Provisioning
In this section a form of Dynamic Service Provisioning (DSP) is proposed. DSP allows
for the creation of SoS based on services which are created only when needed. This allows the
ecosystem of services to dynamically grow and shrink as required by the context-aware
systems. For example, in Figure 3 the threshold system compares the temperature of a
chamber to a reference level and pushes a notification when an event is triggered.
Figure 3: A basic service composition which creates context about the chamber temperature. If
the chamber temperature crosses 60°C an event is pushed to a consuming system. The service
labeled “a” contains the expression of interest as defined in Section 3. The service labeled “b”
is the provisioning interface. Services “c” and “d” are the data exchange links which are part
of normal operation.
The systems in Figure 3 have several interfaces. Service “b” between the Orches-
tration system and the Threshold system contains 1) customization parameters and 2)
target system addressing information. This is the system's composition interface. When
a request has been received, if the threshold system is able to comply with the request, it
will provision a sub-system which contains the interfaces for service “c” and “d”. This is
called a dynamic system, which creates new services based on provisioning requests. The
services are created for use in a particular system composition. Each of the subsystems
is capable of handling different customization parameters. Furthermore, the individual
subsystems are uniquely identified and have independent security controls and authen-
tication. This provides many management advantages such as Quality of Service and
logging. Figure 4 shows this as an architectural block diagram of a dynamic system
structure.
Figure 4: The block diagram of a dynamic system, able to provision new threshold interfaces
based on request. Each sub-system is independent in terms of operation, but shares a single
execution instance.
The dynamic provisioning interface will vary from system to system depending on
the customization requirement. The variation of customization requirements is large and
growing as new processes are added. SenML is a markup language used for reading and
configuring sensor devices. It is lightweight and its loose definitions do not limit semantic
meaning or limit its extension to new sensor information. Here we propose to use SenML
as a means to send the customization information to the system. By using SenML it is
possible to have a simple, reusable library for serializing different parameters. As shown
in Figure 5, the customization parameters for the threshold system are captured fully.
Adapting this for another system is not difficult, as the JSON parameter name “n” entries
are simply updated with the new parameters.
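A minimal sketch of such a reusable serialization helper, using only standard Java (a production implementation would more likely use a dedicated SenML or JSON library); the parameter names mirror Figure 5 and are otherwise illustrative:

import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class SenmlProvisioning {

    /** Serializes a base name and a set of named parameters into a SenML/JSON payload. */
    static String build(String baseName, Map<String, String> parameters) {
        String elements = parameters.entrySet().stream()
                .map(e -> String.format("{\"n\":\"%s\",\"v\":\"%s\"}", e.getKey(), e.getValue()))
                .collect(Collectors.joining(","));
        return String.format("{\"bn\":\"%s\",\"e\":[%s]}", baseName, elements);
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("reference", "60");                          // threshold level, as in Figure 5
        params.put("source", "coap://192.168.7.2:5683/value");  // data source to be monitored
        System.out.println(build("query.system.origin", params));
    }
}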
In response to such a request the processing system will give access details of the
newly provisioned service provider. Figure 6 shows an example of a response payload.
The dynamic service instance makes use of the Arrowhead Framework authorization
pattern. This means that access control by the data source allows ownership of the
data to be maintained. Where a mutual commercial agreement between two systems is not
available, the dynamic system could act as a third party, so long as there exists a commercial
agreement between both systems and the dynamic system. This also holds for building a
POST coap://ip:port/threshold/provision
{
  "bn": "query.system.origin",
  "e": [{"n": "reference", "v": "60"},
        {"n": "source", "v": "coap://192.168.7.2:5683/value"}]
}
Figure 5: A JSON example request to provision a new interface for the threshold system. Here,
the SenML/JSON content type is used. “bn” is a basename and identifies the system that originated
the query. “e” refers to the element array, and each item of the element array is a configuration
parameter related to the specific processing system type. In this case CoAP is the communication
protocol, but this could also be HTTP or MQTT.
{
  "con": "coap://192.168.7.2:62158",
  "res": "</name>;rt=threshold;ct="
}
Figure 6: An example response payload giving the access details of the newly provisioned service
provider.
{
  "e": [
    {"n": "rotation", "v": 1, "u": "count"},
    {"n": "temperature", "v": 60, "u": "Cel"},
    {"n": "vibration_max", "v": 4, "u": "m/s2"},
    {"n": "vibration_min", "v": 0, "u": "m/s2"},
    {"n": "vibration_rms", "v": 3, "u": "m/s2"}
  ],
  "bn": "urn:dev:mac:0024befffe804ff1"
}
Figure 7: The wheel loader ball bearing sensor data returned when queried.
Such collection and processing would need to be repeated for each wheel loader in service. This
complexity is a by-product of the original mission of the monitoring application: 1) to schedule
preventative maintenance and 2) to notify if early end-of-life has occurred.
Using the proposed methods, it is possible to reduce application complexity. By
allowing the SOA-based System of Systems to do the processing, the application only
needs to handle relevant information. In this case, the expression of interest for information
is shown in Figure 8.
MATCH ()-[ns:NotificationService]->()
-[ts:ThresholdService]->()
-[:WheelNodeService]->({l:machineABC})
WHERE ts.vibration_rms > 3.3
RETURN ns.address
Figure 8: The expression of interest using the Open Cypher query language and the proposed
SOA-based description. Here “ns” is the notification service, “ts” is the threshold service and “l”
is the machine on which the wheel loader node is mounted.
In this case, first the ball bearing data must be collected and threshold tested, then
the data must be aggregated for all wheels on a malfunctioning wheel loader, before being
sent back to the monitoring application. This requires a dynamic threshold system and
an aggregation system to be provisioned in order to present the ball bearing sensor data.
This composition is shown in Figure 9.
This means that the information has been completely processed by standardized and
reusable systems. It means that the monitoring application simply awaits notification by
the composition and then schedules the maintenance.
5.1 Implementation
The objective of this implementation is to test that the interface definitions and the proposal
are sound. The base programming language used for the application systems is Java.
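The following sketch illustrates how a dynamic threshold system could expose a CoAP provisioning resource, assuming the Eclipse Californium library; resource names, ports and payloads are illustrative and not the exact interfaces of the demonstrator:

import org.eclipse.californium.core.CoapResource;
import org.eclipse.californium.core.CoapServer;
import org.eclipse.californium.core.coap.CoAP.ResponseCode;
import org.eclipse.californium.core.server.resources.CoapExchange;

public class ThresholdProvisioningServer {

    /** Resource that provisions a new, independently addressable threshold sub-resource per request. */
    static class ProvisionResource extends CoapResource {
        private int counter = 0;

        ProvisionResource() {
            super("provision");
        }

        @Override
        public void handlePOST(CoapExchange exchange) {
            String senml = exchange.getRequestText();   // SenML/JSON customization parameters (parsing omitted)
            String name = "instance-" + (++counter);
            add(new CoapResource(name));                // the newly provisioned sub-system
            // Respond with the access details of the provisioned instance (cf. Figure 6).
            exchange.respond(ResponseCode.CREATED,
                    "{\"con\":\"coap://192.168.7.2:5683/threshold/provision/" + name + "\"}");
        }
    }

    public static void main(String[] args) {
        CoapServer server = new CoapServer(5683);
        CoapResource threshold = new CoapResource("threshold");
        threshold.add(new ProvisionResource());
        server.add(threshold);
        server.start();
    }
}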
Figure 9: A diagram of the system composition which creates the information for the context-
aware monitoring application. The blocks connected with a dashed line are a single module, with
the blocks above being persistent and the blocks below being temporary.
6 Conclusion
Industry is looking to gain flexibility in growing and evolving its manufacturing sys-
tems. Applying Service Oriented Architecture (SOA) to System of Systems (SoS) engi-
neering allows flexible networked embedded systems to participate in multilayer enter-
prise systems. Of particular interest among reusable systems is information processing.
There has been a trend to move away from centralized systems. This is due to the weak-
nesses of: 1) single point of failure, 2) bottlenecks, 3) resistance to change and 4) costly
maintenance.
This work has presented a novel approach to information extraction for context-
aware systems in Industry 4.0. Context-aware computing, when applied to SoS, results in
intelligent systems which can make decisions based on changes to location, recipe, quality,
availability, safety, and environment. This reduces, and in some cases removes, the
requirement on centralized infrastructure for data processing and storage. It also helps to
maintain soft real-time operation in the overall production line by enabling edge analytics.
Centralized systems are used to manage overall objectives and to compose SoS. However,
once the SoS is composed and the systems have the required knowledge, they can operate
without further guidance from centralized systems.
The proposed solution was showcased on a demonstrator use case of a wheel loader
ball bearing monitoring application.
This work will be continued to further refine and formalize the composition method-
ology. Building on the expression of interest for information and the dynamic systems
proposed here, the composition engine must analyze the available composition graph to
find the optimal shortest path.
In addition to the information requirement, the security overlay must be considered
for the composition. This includes strict access-control and trust rules and more malleable
trust which can be established dynamically.
Acknowledgement
The authors would like to express their gratitude towards their partners within the Far-Edge
project and the ongoing work within the Arrowhead Framework open project.
References
[1] H. Lasi, P. Fettke, H.-G. Kemper, T. Feld, and M. Hoffmann, “Industry 4.0,”
Business & Information Systems Engineering, vol. 6, no. 4, pp. 239–242, Aug 2014.
[Online]. Available: https://doi.org/10.1007/s12599-014-0334-4
[2] B. Scholten, The road to integration : a guide to applying the ISA-95 standard in
manufacturing. Research Triangle Park NC : ISA, 2007.
[11] T. Erl, SOA Design Patterns, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall
PTR, 2009.
[18] A. Dunkels, B. Gronvall, and T. Voigt, “Contiki - a lightweight and flexible operating
system for tiny networked sensors,” in Local Computer Networks, 2004. 29th Annual
IEEE International Conference on, Nov 2004, pp. 455–462.
Authors:
Hasan Derhamy, Mattias Andersson, Jens Eliasson and Jerker Delsing
Workflow Management for Edge Driven
Manufacturing Systems
Abstract
1 Introduction
The fourth industrial revolution is distinct from the third industrial revolution in: ve-
locity, scope, and systems impact [1]. This is because the fourth industrial revolution
involves the fusion of technology from physical, digital and biological domains. This
means new products will be developed and manufactured and traditional products will
be manufactured in new ways. Two key challenges facing traditional manufacturers in
the fourth industrial revolution involve the scale and cost of digitization. Issues of scale are
not limited to the volume of devices or inputs and outputs; the scale of
complexity is also increasing. Flexibility and adaptability are some of the key drivers
of this complexity. Traditional centralized approaches suffer from demanding availability
and security requirements. As the dynamics on the factory floor increase, the
infrastructure demands increase, thereby increasing the costs of backup systems.
New approaches such as the Reference Architecture Model for Industry 4.0 (RAMI
4.0) decentralize responsibility to individual components [2]. Figure 1 shows the RAMI
4.0 diagram. It shows that some enterprise logic exists even at the device level. However,
how much logic should be included in decentralized systems, and how should all this logic
be coordinated?
The edge computing paradigm has been suggested to help alleviate the backhaul of
large volumes of sensory traffic. However, edge computing, when applied to a manufactur-
ing scenario, can also help to reduce the decision-making path. If RAMI 4.0 is applied to edge
computing, then a pattern for answering the question of coordinating and decentralizing
logic can be formed. The Far-Edge project is developing a platform that enables RAMI 4.0
based application development. The platform is building on the Arrowhead Framework,
utilizing Service Oriented Architecture (SOA) thinking to divide responsibilities.
This paper proposes edge automation services that coordinate and enable productive
collaboration between Cyber Physical Systems (CPS). The edge automation services
consist of functionality required to plan, sequence and interconnect CPS systems. It uses
the Arrowhead Framework core support systems as its basis and adds new systems, in
particular for sequencing of the production process.
This can be interpreted as removing the need for a centralized MES layer. Alternatively, it means that the MES layer has been decentralized across the
edge tier, with the edge automation services as the enabler.
It is beneficial to define some of the core operations of the MES layer: Planning,
Supply chain, Inventory, Quality of Service management, Tracking and Execution.
This paper focuses on the tracking and execution aspects of the MES. Tracking is made
through the intelligent product unit that holds the planned workflow and the current state
of activities. Execution is conducted through the Workflow manager, Workflow executor
and smart objects such as intelligent equipment.
Once the MES data is available at the edge tier, the Edge Automation Services (EAS) are able to complete the required
production steps with very little or no connectivity to any cloud based systems. Moreover,
Production Orders are also updated by the EAS as a consequence of production steps being
completed. A working copy of the production order is interacted with at the edge; this is
a ”disconnected” approach. The working copy is then propagated to the master copy once
connectivity is available.
Overall, edge automation services are expected to have the following advantages over
a traditional centralized architecture:
1. Reduced dependence on cloud connectivity and centralized infrastructure during
production.
2. Shopfloor more modular and flexible, due to individual workstations (workcells) being
autonomous.
3. Automation more reactive to shopfloor events, due to the shorter path between
business logic and sensing/actuation hardware.
There are two situations where these advantages become very apparent. The first is where a
workstation must be operated within an environment where no factory-wide MES is avail-
able. This ”autonomous” workstation only requires that the smart product it will operate
on has an interface to communicate the workflow for the target station. This information
that the smart product contains can be loaded onto the product prior to shipping from the
origin factory, or can be transferred through some secure means to the workstation.
The second situation is that because the workflow does not specify execution details
(this is the responsibility of the workflow executor orchestration), ”plug-in-produce” is
supported for equipment installation to the work station. Meaning that at runtime an
”intelligent nutrunner” can be plugged into the workstation and utilized immediately.
2 Related work
Workflow management, also known as business process management, has been a topic of research since
the start of industrialization [4]. A workflow, or business process, is the combination of
activities and information toward the creation of new value [5]. In [6], many reusable
workflow patterns are documented and existing workflow frameworks are evaluated for
support of the patterns. Workflow patterns are split into four core categories: Control,
Resource, Data and Exception Handling workflows.
In [7], Sardis et al. describe a distributed workflow and monitoring system. They
propose an architecture that combines rule based alarm detection, fault planning through
pre-planned behaviors, decision support and workflow monitoring. They suggest that a
distributed architecture such as Multi-Agent Systems (MAS) or SOA be used for real-
ization of the workflow manager and monitoring. However, they do not propose how the
workflow system should be realized.
Quintanilla et al. propose a holonic approach toward workflow execution using SOA
[8]. In this work, they propose three holons: the Product Holon (PH), Order Holon (OH) and
Resource Holon (RH). The PH is responsible for exploring workflow solutions, meaning that it
performs planning and scheduling. The OH is responsible for execution of the workflow
tasks specified by the PH, while the RH provides the manufacturing services utilized
by the OH to complete the workflow. Petri-nets are used to model the workflow and to
evaluate alternatives.
REST and hypermedia have been proposed for workflow automation. For example, in
[9] Balis proposes a hypermedia-driven workflow for data-driven scientific experiments.
In this context a data-driven workflow is implemented using a Hypermedia As The
Engine Of Application State (HATEOAS) approach. Data and Tasks are addressed as
resources and a client ”executor” is responsible for following the hypermedia links. The
links have an appropriate state machine embedded. Clients and resources must have
agreement on link relation meanings and resource types. The client executor knows the
overall work flow state machine required to be executed. Starting by preparing data, the
client moves through servers according to the link and relations returned by the servers.
This work closely resembles the work by Quintanilla et al. in [8]. There are clear
differences with separation of concerns and centralization of the Order holon. Manufac-
turing systems that require segregated work centers require decentralized task execution.
Here, the proposed approach has a decentralized, stateless Workflow Manager that imple-
ments a blackboard architecture pattern. The blackboard is available to all equipment
systems operating in the work center, meaning that the concurrent workflow descriptions
available can be acted upon by individual systems as needed.
3 Arrowhead Framework
The Arrowhead Framework is based on Service Oriented Architecture (SOA) [10]. It
defines Systems as the containers of Services that are exchanged between systems and
clients. The concept of local clouds is defined as a means of controlling Quality of Service
(QoS), increasing security and segregating functionality. Within each local cloud there
are several mandatory core systems and support core systems. These services provide
functionality for discovery, access control, and interconnect management. Additionally
they provide interoperability, storage and configuration services. In order to describe the
plant layout and capability locations the Plant Description [11] support core system is
defined. Figure 3 shows the main building blocks of the Arrowhead Framework.
The Arrowhead Framework is an active open source project that is being contributed
to by European Union funded projects such as Productive 4.0 and Far-Edge. The relevant parts
of the Arrowhead Framework are described in the proposed solution.
The production order is the fundamental data required by the edge automation services.
It is carried by CPS based smart products or a digital twin (where CPS is not available).
Figure 4 shows a block diagram of a smart product in a factory with edge automation
services.
Therefore the edge automation services have an interaction point in the smart product
unit. As also shown in Figure 4, the smart equipment at each work cell makes up I4.0
components that interact with the edge automation services.
Figure 3: High level perspective of Industry 4.0 enabled smart product and work cells
Each workflow in the production order (Figure 4) is mapped to a work station (cell). The mapping is decided based on the known
capabilities available at the work station. This can be done dynamically, meaning that
scheduling and routing must be performed by local systems. The planning function is not
addressed in this paper. It can be assumed that the plan is statically created and that
the product unit is the container. Factory, work center, station and device capabilities
must be known when creating the production order.
Each work station hosts a Workflow Manager that is responsible for managing progress of the production process/recipe and reporting
back to the Factory Control System (FCS).
Here the FCS is a Volvo term; however, it can also be seen as a function, namely overall
coordination of the factory. The FCS is not responsible for
coordinating equipment; rather, it is an information source for products and equipment
to refer to when requiring information outside their direct scope. Here it can be seen where the
classical MES and RAMI 4.0 approaches come into contact. The control logic (quality, process, lo-
gistics, etc.) is placed inside the smart product/digital twin rather than in a centralized
service. However, the FCS remains as a migration point where not all information can
be located within the I4.0 Component.
Interfaces
The Workflow Manager has 3 interfaces used to synchronize with smart product units
(CPS/Digital Twin) and 2 interfaces toward smart equipment. The Workflow Manager
consumes services provided by the product unit, making ”pull” requests for production
recipe information in a RESTful manner.
1. RetrieveProcessStep
2. StoreProcessStepResult
3. StoreOperationResult
The Workflow Manager need only identify itself (workstation id) to the product unit,
as the recipe has operations allocated at a workstation level. It is assumed that this ID
is set when the local cloud instance (edge gateway) is being set up. This must be completed
before the Workflow Manager is able to take part in the edge automation activities. It
is possible that these service interfaces will evolve to allow further parametrization as
required. Toward the edge, the Workflow Manager is a service provider and will listen
for service invocation requests in a RESTful manner.
1. RetrieveOperation
2. StoreOperationResult
The Workflow Manager services are designed such that they can be used by both push
and pull communication patterns, such as MQTT or CoAP/HTTP. CoAP has been
selected for the prototype implementation due to the convenience of application libraries
and simple integration with application code. Additionally, CoAP has advantages for
RESTful communication, with a push notification channel available. The usage of the
interfaces is described in the next sections.
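As a minimal sketch, assuming the Eclipse Californium CoAP library and illustrative resource paths (the concrete Arrowhead service URIs are resolved at runtime through the Service Registry), a workstation system could use the two services as follows:

import org.eclipse.californium.core.CoapClient;
import org.eclipse.californium.core.CoapResponse;
import org.eclipse.californium.core.coap.MediaTypeRegistry;

public class WorkflowExecutorClient {
    public static void main(String[] args) {
        // Hypothetical Workflow Manager address; in practice it is resolved through the
        // Arrowhead Service Registry and Orchestration systems.
        String base = "coap://workflow-manager.local:5683";

        // Pull the next operation for this work station (RetrieveOperation).
        CoapClient retrieve = new CoapClient(base + "/retrieveOperation?station=ws-01");
        CoapResponse operation = retrieve.get();
        System.out.println("Operation: " + operation.getResponseText());

        // Report the result once the smart equipment has completed the job (StoreOperationResult).
        CoapClient store = new CoapClient(base + "/storeOperationResult");
        store.post("{\"operation\":\"op-1\",\"status\":\"OK\"}", MediaTypeRegistry.APPLICATION_JSON);
    }
}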
Operation
A Workflow Manager will remain passive, awaiting notification that a product unit has
arrived at the work station. On arrival of a new product at the work station, the
Workflow Manager will query the smart product (or its digital equivalent) for the recipe
related to the work station. The production order will be checked to make sure any
dependencies between work station flows have been met. Only relevant steps of the recipe
will then be passed to the work station (Workflow Manager).
With the new recipe loaded, the workflow will begin with edge systems pulling work
instructions. Once work instructions are completed, the results are sent back to the
Workflow Manager. Overall completion of the workflow is decided by the Workflow
Manager once all results are compiled. The final results are then sent back to the product
unit with confirmation. A simplified version of this is shown diagrammatically in Figure
5.2. It illustrates a typical interaction between the Edge Automation Services. Here the
third item ”Workstation equipment” groups together any individual smart objects that
can be coordinated through the Workflow Manager.
The Workflow Manager holds the digital instruction list as a form of state machine.
This state machine is passed to an executor system within the Workstation. The executor
system is customized and configured for a particular Workstation. The customizations
are both in terms of configuration parameters and capability interactions. The capability
interactions are defined by Orchestration rules, described under the orchestration section.
Figure 5.2 shows a block diagram of this interaction. The Workflow Manager passes the
instruction list as a hypermedia-enabled state machine. Using IETF RFC 5988 and RFC 6690,
namely Web Linking and the CoRE Link Format, it is possible to describe the state machine
as a set of related resources.
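As an illustration only (the resource names and link relations below are hypothetical, since the concrete instruction-list encoding is defined by the Workflow Manager), such a state machine could be expressed in CoRE Link Format as a chain of linked operation resources:

</workflow/op/1>;rt="operation";title="pick part",
</workflow/op/2>;rt="operation";anchor="/workflow/op/1";rel="next";title="tighten bolt",
</workflow/op/3>;rt="operation";anchor="/workflow/op/2";rel="next";title="inspect"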
Figure 6 shows the system block diagram of the use case solution. In the figure the
FCS Adapter is a temporary system that serves as a migration path. It serves the same
function as the Product Unit Hub described in the original platform.
Figure 7: Block diagram of RAMI 4.0 enabled work station utilizing Edge Automation Services
from the Far-Edge platform and the Arrowhead Framework.
The Workflow Manager and executor systems can be seen in Figures 8 and 9. These
systems create the link between the CPS Smart Product and Smart Equipment. The
Service links are discovered through the Arrowhead Service Registry and Orchestration
Systems.
Figure 8: I4.0 enabled Workflow Manager System with administrative shell and services. Upstream
services are on the left side and downstream services are to the right.
Figure 9: I4.0 enabled Workflow execution system with administrative shell and services. Upstream
services connecting to the Workflow Manager are to the left and downstream services
connecting to the smart equipment and objects are to the right.
Figure 10: I4.0 enabled Nutrunner with administrative shell and services. DoJob and NotifyRe-
sult are standard services for generic equipment requests.
7 Future work
Still to be added are the configuration integration and the plant description updates. Future
work includes generating the Orchestration rules from the plant description and then executing
a workflow through the Workflow Manager and Executor systems, as well as fully utilizing the
Next Generation Access Control mechanism in the Arrowhead Authorization system.
8 Conclusion
Demonstrated in this paper is an approach to decentralizing the ISA 95 architecture
utilizing the RAMI 4.0 approach on edge computing and SOA-based system design.
A Workflow Manager is proposed as the integration point of MES data between smart
products and smart equipment. This reduces the demands on centralized infrastructure
whilst maintaining centralized resource planning of the overall production. The proposed
solution is demonstrated on an industrial use case.
In summary, the Arrowhead concepts can be described as:
220 Paper H
Acknowledgment
The authors would like to thank the Far-Edge and Productive 4.0 projects for funding
and sharing thoughts through discussion and debate.
References
[1] “The Fourth Industrial Revolution - What it means and how to respond. World Eco-
nomic Forum,” Jan 2018. [Online]. Available: https://www.weforum.org/agenda/
2016/01/the-fourth-industrial-revolution-what-it-means-and-how-to-respond/
[3] E. A. Lee, “Cyber physical systems: Design challenges,” in 2008 11th IEEE In-
ternational Symposium on Object and Component-Oriented Real-Time Distributed
Computing (ISORC), May 2008, pp. 363–369.
[4] W. M. van der Aalst and K. M. van Hee, Workflow Management: Models, Methods,
and Systems. The MIT Press, 2000.
[5] W. M. P. van der Aalst, A. H. M. ter Hofstede, and M. Weske, “Business pro-
cess management: A survey,” in Proceedings of the 1st International Conference on
Business Process Management, volume 2678 of LNCS. Springer-Verlag, 2003, pp.
1–12.
Authors:
Hasan Derhamy, Jens Eliasson and Jerker Delsing
System of Systems Composition based on
Decentralized Service Oriented Architecture
Abstract
1 Introduction
Initiatives in the manufacturing industry are looking to boost productivity by leveraging
advances in connectivity. The increase in connected devices is due, in large part, to
increases in low cost networking and computing. This has meant that there is a significant
increase in the volume and flow of information. All these developments have occurred
within a landscape of ISA 95 [1] deployments. ISA 95 supports vertical integration
and information flow. However, this also created bottlenecks from many perspectives.
Comparing centralized and decentralized information flow:
Storage Centralized storage exhibits database bottlenecks, meaning that storing and
retrieving data is usually restricted to ensure that the database is not overloaded with
requests. Local storage handles requests within its local scope; therefore fewer requests
would be present. Also, if a decentralized database does experience reduced performance
due to overloading, the area of impact is local and does not reach wider applications.
Communication Centralized storage must have strong networks. The network be-
comes a bottleneck as traffic increases, meaning close monitoring and costly backbone
infrastructure is required. Redundant networks become needed to reduce down time in
case of network failure. Performance degradation on the network also has a wide impact.
On the other hand with local storage, network performance degradation has a lesser
impact on normal operations.
Engineering Centralized systems will require a specific set of competencies such as data
warehousing, network administration, server maintenance, etc. These specialists are
required to handle scaling up infrastructure and tools. Localized systems, on the other hand,
do not require specialized knowledge to scale. They simply do not reach such demanding levels,
and so competent general engineering skills are sufficient.
Leading on from ISA 95, the Industry 4.0 initiative has proposed the Reference Architec-
tural Model for Industry (RAMI) 4.0 [2] and the I4.0 Component model [3]. The I4.0 compo-
nent model captures the notion of an administration shell abstracting digitized equipment
and products with high levels of connectivity. A key diagram of RAMI 4.0 is a 3-
dimensional pyramid/cube. It represents that I4.0 components at different ”hierarchies”
must go through an individual ”life-cycle” and participate at each ”layer”. From a
layer point of view, a single ”layer” cannot be confined to a single level of the ”hierarchy”.
Rather, a single ”layer” is spread across many levels of the ”hierarchy” and is present at
each stage of the component's ”life-cycle”. This is shown in Figure 1.
A central philosophy of the RAMI 4.0 approach is that connectivity is no longer purely
vertical. I4.0 components can communicate with one another vertically, horizontally,
or diagonally. Due to layers being spread across the hierarchy, decision making is no
longer centralized. If information flow is flexible in this manner, then certain decision
making can be done on the factory floor rather than in an MES cloud, meaning that
work centres, cells and stations could have a much higher level of autonomy. Reducing
the reliance on centralized decision support will also reduce the software and networking
dependence between physical work cells, meaning that if one work cell malfunctions, then
others may not suffer any performance degradation.
In addition, internal development teams may not be able to handle the complexity
and size of the automation solutions. Involving multiple vendors in this development
indicates that the networked systems may not all belong to a single owner. Equipment
as a Service may be a viable manner to distribute life cycle costs (design, development,
maintenance) of such systems. This means that security assumptions regarding privacy,
confidentiality and trust must be reviewed. Heavy integration costs and information
blockages must be avoided between multi-vendor systems. This means that each system
must be able to establish trusted and confidential connections for exchanging data.
The final consideration is that software technology evolves at a rapid
pace, much more quickly than traditional mechanical or electronic technologies. Software
development practices tend toward adoption of Agile [4] practices. This entails short
development cycles with small releases and fast deployment. The solution therefore must
support incremental change with minimized impact due to disruption of operations. In
addition software technology evolution means that the approach toward and treatment
of software technology must be one of embracing change. A successful solution must have
a clear path for change and ensure that it is not bound to a single technology that will
be expensive to change.
Service Oriented Architecture is one such architectural approach that tackles technol-
ogy abstraction through abstract messaging interfaces. However, IIoT systems are not
pure software; they involve hardware, and this hardware will create physical dependencies
between the services offered from the hardware. Hence, System Theory is utilized, with
services being offered and utilized between systems. Modular and independent systems, which can be pure
software or embedded software, are modelled as black boxes with technology-independent
service interfaces between one another for communication and collaboration.
The proposed solution is designed to satisfy the above requirements. The solution
proposes a software system that is able to process information queries and set up com-
munication routing to produce and move the required information. To achieve this, a
graph model is proposed. The graph model is required to support reasoning about the
System of Systems. The queries are also based on a standard graph query language,
Open Cypher. The next section will introduce some related work before the proposed
solution is presented in Section 4.
3.1 Mashups
The notion of mashups comes from mixing and combining the presentations of different
data sources into a single view. For example, Twitter and Facebook feeds can be viewed
on one page together with Instagram updates. This page would be a mashup of three
different social media sources. Another example, Trendsmap [5], is a web application that
utilizes the Twitter and Google Maps APIs to show Twitter activity by location, meaning
that users are able to view what the trending Twitter topics are in particular cities. The
same can be applied to sensory data from multiple sensors or to equipment KPIs along
with the current production orders.
Mashup approaches such as the one offered by ThingWorx [6] are a great example of
run-time based creation of information flow. ThingWorx is a business oriented platform
for connecting IoT devices with different protocols and routing data between devices.
Access to IoT data, and thus the visualization of related data from different sources, is a
primary driver. The Web of Things community has developed WoTKit [7] for building mashups.
WoTKit targets web mashups, meaning that only web servers and clients are
used. However, these solutions both rely on centralization of either the data or the
distribution of the data. Node-RED [8] is a JavaScript-based graphical environment that can
mash together different services into a composition. Its structure is similar to typed pipe
and filter diagrams. The IFTTT (“if this then that”) platform [9] is an IoT mashup
cloud that supports applets and services that can connect heterogeneous things together.
Using chained conditionals, it is a strong commercial mashup platform.
object to reach the same operation. When evaluating the policy, the evaluation engine
must have reliable information regarding the current attributes of the object and the
user. Therefore a secure information extraction solution must work with context-aware
security mechanisms such as NGAC.
4 Proposed solution
The solution proposes a software System, called the System Composer, that is responsible
for receiving the information query, processing the query against the system, service and
access control graphs, formulating an information flow path and issuing the orchestration
rules for the SoS. The system that originated the information query receives instruction in
the form of an address that it can message to retrieve the result of the SoS. Alternatively,
an MQTT topic could provide a publish/subscribe channel for the system to receive
information from. This is the simple interface required of the System Composer for the
proposed solution. The System Composer service interface is discoverable through a ser-
vice registry and its access rights are secured through an NGAC-enabled Authorization
System. Internally, the System Composer relies on a graph of the local-scope systems,
service instances and types, and access rights. This graph is based on the proposed graph
model presented in Section 5. It is built through interaction with the Arrowhead Device,
System and Service Registries, and with the Service Inventory and Authorization System. The
System Composer is stateless in the sense that none of its memory is persistent. The graph is
rebuilt on startup and maintained by active communication with the mentioned Arrow-
head Framework core support systems. The information query is expressed as a graph
query using Open Cypher. The complete solution is presented in the following sections;
first, a graph model of an SoS-enabled SOA framework, the Arrowhead Framework, is
presented.
5 A Graph analysis of Arrowhead Framework
As shown in Figure 2, these four entities form the base vertices of the graph.
The Arrowhead Framework uses these primary blocks to build Systems of Systems
(SoS). To build an SoS, the primary entities must be related to one another through a
set of edge types.
With this set of edges, it is possible to construct a simple path between systems.
For example the path can be functional using Required and Provided By edges, or a
communications path based on Supported and Provided edges. A sub-graph with a
functional path is illustrated in Figure 3. Here it can be seen that the path makes up a
bipartite graph. There are only edges between the two sets of System nodes and Service
Definitions. There are no edges within either set.
G = {V, E}
where
V = {A, B}
|B| = |A| − 1
|E| = |B| · 2
The communications graph is built from a set of participant Systems and Service
Interfaces. To build this graph the Service Definition is used as an anchor to the Service
Interfaces. In this way it is possible to move from the functional sub-graph to the
communications sub-graph.
This sub-graph can be seen in Figure 4.
Figure 4: A first System of Systems: A bipartite communication graph with two participant
Systems.
G = {V, E}
where
V = {A, B}
|B| = |A| − 1
|E| = |B| · 2
or
|E| ≥ (|A| − 1) · 2
therefore
|V| = |A| · 2 − 1
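For example, a chain of three participant systems gives |A| = 3, which requires |B| = |A| − 1 = 2 service interfaces, giving |E| = |B| · 2 = 4 edges and |V| = |A| · 2 − 1 = 5 vertices in the communications sub-graph.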
Figure 6: The NGAC graph entities/nodes. User as ‘usr’, Attribute as ‘attr’, Object as ‘obj’
and Operation as ‘op’.
Figure 7 shows the graph relations between the Arrowhead Framework nodes and the
NGAC nodes. Utilizing the system nodes and the service interface node as anchors the
related User, Object and Operation nodes can be found. Next a path must be traversed
between the two systems through only the NGAC graph nodes.
A complete path in the NGAC graph between the systems means that the required
attribute permissions exist for the consuming system to invoke the operation upon the
providing system. With these three primary graphs it is possible to build validated and
secured Systems of Systems. The next section will discuss the method of utilizing the
graph models.
Run-time Orchestration
When the SoS specification is run against a live graph, the
service exchanges between systems are identified. These identified service exchanges are
what generate the Orchestration rules.
A full graph of the local cloud can be maintained by synchronizing the different data
stores. The graph data does not change rapidly, as there are limitations on how quickly a
Cyber Physical System can be relocated. Depending on the protocol choice, this could
require polling the data stores for changes. However, to avoid this, the data stores can
notify the graph that a change has occurred, allowing the pull from data sources to be
event driven. At no stage does the graph make changes to the data stores. Therefore
no write synchronization is required.
As described in the next section, the queries made to the graph are read-only;
changes such as the injection of local bridges are either not reflected in the data stores or
are made through the normal Device, System and Service registration.
Function Path
To find the functional path, a combination of known Systems and Service interfaces must be
anchored. The information which makes up this graph is formed using System
Registry and Service Inventory data. Given a start System, an ordered set of services
and an end System, a functional path can be built using a chained MATCH query (shown
in the implementation section below).
The query will return all nodes and relations which create the path. The nodes will
consist of a set of Systems and a set of Service Definitions. These two sets are then used
to find the communications path.
Communications Path
To find the communications path, each of the Service Definition and System pairs must
form a triple with the interface definition. A consuming system supports an interface
definition and the interface definition is offered by a providing system. Hence, Figure 9
shows the result of the query in Equation 2, which is the consuming triple.
MATCH
  (a:System {name: “a”})-[:Supports]->(inst),
  (inst)-[:Implements]->(b:Type {name: “b”})
RETURN inst; (2)
Figure 10 shows the results of the provider side query shown in Equation 3.
MATCH
  (a:System {name: “c”})<-[:Provided by]-(inst),
  (inst)-[:Implements]->(b:Type {name: “b”})
RETURN inst; (3)
A connected communication path will match a single interface definition, which means
the two systems utilize interoperable services. This is shown in Figure 4, which has a
connected communication path between the two participant Systems. Where a communication
path is not found, the resulting graph will look like Figure 11. In this situation, the System
of Systems can be adapted to create a connected communication path.
This case is covered in the system composer implementation section that uses the
translator to inject a local bridge and create a communications path.
Figure 11: A broken communications path is shown here. The two systems support and provide
different interface designs for the same service definition.
Authority Path
Using the Arrowhead and NGAC relationships, it is possible to shift from the functional
domain to the security domain. An attribute path must be found between the providing
system and consuming system which passes through the operations which represent the
service instances. The attribute designation is made through a secure management portal.
The requirement for an attribute policy is also made through a secure management portal.
Hence, this part of the graph is certainly read-only. A break in the security path
means that there is no authorization for service exchange between the participant systems.
The graph query to find the security path must first identify the nodes which represent
the users, objects and operations. Using the definitions from Figure 7 each interface
definition is mapped to a single NGAC operation. A System can have at most one
NGAC Object and one NGAC User.
A path must be found between the User that is the Alias of the Consuming System,
the Operation defined by the Interface Definition, and the Object represented by
the Providing System. Figure 7 shows a generic example of the authorization path.
7 Implementation of a System of System Composer
The System of System Composer is a tool which can be utilized by technical and non-
technical users to compose new System combinations. It uses the processes described
above to build a graph of the device, system, service and security landscape.
To navigate the graph an anchor needs to be located. This is the starting system and
is usually found through direct reference to a unique system name, or through a unique
combination of meta-data connections.
The System Composer is itself a participant in an SoS. Figure 8 shows the SoS required
to generate the graphs. The graphs are kept up to date through polling and notifications
between the Systems in Figure 8.
Utilizing Open Cypher, the responses from the data stores are stored in a Neo4J
instance using the following queries:
A merge operation is used to avoid creating duplicate nodes and edges. Device data
can have many nodes connected to it; they represent connected meta-data. For example,
a device is connected to a location, which can be a physical or a logical location. No
edge label is required for meta-data nodes.
MERGE
  (dev:Device {name: 'A8076'}),
  (loc:Location {name: 'A2301'}),
  (dev)-->(loc)
System data will include an association to the device which is hosting the system, the
services provided and consumed by the system, and any connected meta-data associated
with the system. A sample Cypher command to store the minimum System data:
MERGE
  (sys:System {name:'Nutrunner'}),
  (jobServiceOffering:ServiceType {name:'EquipmentJob'}),
  (notifier:ServiceType {name:'EquipmentNotification'}),
  (ins:ServiceInstance {name:'showworkorder-presenter'}),
  (jobServiceOffering)-[:OFFERED BY]->(sys),
  (notifier)-[:OFFERED BY]->(sys)
MERGE
  (ser:ServiceType {name:'showworkorder'}),
  (sys:System {name:'presenter'})
CREATE
  (ins:ServiceInstance {name:'showwork'}),
  (sys)-[:OFFERS]->(ins),
  (ins)-[:OF]->(ser);
To store the Service data (resulting from a query to the Service Inventory):
MERGE
  (sd:ServiceDefinition {name:'EquipmentJob'}),
  (idd:ServiceType {name:'EquipmentJobCoAP'}),
  (sd)-[:IMPLEMENTED BY]->(idd)
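Taken together, these fragments let the composer trace a concrete service instance back to its type, definition and offering system. A minimal sketch of such a traversal, reusing the relationship names above (this exact query is not claimed to be part of the implementation; multi-word relationship types are back-ticked so the statement is valid Cypher):

MATCH
  (ins:ServiceInstance)-[:OF]->(st:ServiceType),
  (st)<-[:`IMPLEMENTED BY`]-(sd:ServiceDefinition),
  (ins)<-[:OFFERS]-(sys:System)
RETURN sys.name, sd.name, st.name, ins.name;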
Once the data has been collected and the graph has been built, the graph can be
presented to a user to graphically draw the SoS composition.
The service interconnections are primarily, and in some cases exclusively, used to spec-
ify the SoS. This is achieved by querying the graph with a MATCH query such as the one shown below.
When only validating the possibility of an SoS, intermediate and end systems can be
left anonymous. Otherwise, while generating the SoS, the path query should return the
intermediate and end systems. It is possible that alternative paths exist, meaning that
there are alternative service providers which can satisfy the specification. In the next
stage, concrete SoS compositions can be generated by looking at the communications
path. To do this, shared service interfaces that implement the service definitions must
be found between the intermediate systems. The queries defined earlier are used to build
the communications path:
As long as the consumer service interface “consumerInt” and the provider service
interface “providerInt” are the same node, the communication path is valid. Building
the communications path means performing this query on each pairing
of Systems found from the functional path query.
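As a sketch, this pairwise check can be parameterized with the two system names and the shared service definition; the parameter names ($consumerName, $providerName, $serviceDefName) are illustrative, and multi-word relationship types are back-ticked so the statement is valid Cypher.

MATCH
  (sysCon:System {name: $consumerName})-[:Supports]->(conInterface),
  (conInterface)<-[:`Implemented By`]-(sd:ServiceDefinition {name: $serviceDefName})
MATCH
  (sysPro:System {name: $providerName})<-[:`Provided By`]-(proInterface),
  (proInterface)<-[:`Implemented By`]-(sd)
WHERE conInterface = proInterface
RETURN conInterface AS sharedInterface;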
The communications path is complete once the bipartite graph connecting systems
and service interfaces has been formed. If the communications bipartite graph is
disconnected, it can be assumed that there are at least two systems that do not share
a common service interface for the same service definition. The functional path query
referred to earlier takes the following form:

MATCH
  (sys:System {name:''}),
  (sd1:ServiceDefinition {name:''}),
  (sd2:ServiceDefinition {name:''}),
  (sd3:ServiceDefinition {name:''}),
  (sys)-[:Requires]->
  (sd1)-[:Offered By]->
  (sys1:System)-[:Requires]->
  (sd2)-[:Offered By]->
  (sys2:System)-[:Requires]->
  (sd3)-[:Offered By]->(sysEnd:System)
RETURN
  sys1, sys2, sysEnd
Once a valid communications path has been found, the System Composer checks for
a valid authorization route between the participant systems using those service interfaces.
Recalling the NGAC mapping utilized in Section 6, each service interface is mapped to
an NGAC operation, and each system is mapped to a user and/or object depending on
whether it will consume or provide the operations. The Authorization path can be built using
the query in Figure 14. The providing system, consuming system and operation nodes
are inputs to this query.
This query must be executed for each System pairing within the SoS. If the path
is successfully returned then a valid authorization can exist between the participant
systems.
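As a sketch, all pairings could also be checked in a single statement by unwinding a list of pairings; the $pairings parameter, its keys and the Operation label are assumptions for illustration, while the relationship names follow Figure 14.

// $pairings is assumed to be a list of maps, e.g.
// [{consumer:'presenter', provider:'Nutrunner', operation:'EquipmentJobCoAP'}, ...]
UNWIND $pairings AS pairing
MATCH (sysCons:System {name: pairing.consumer})-[:`Has Alias`]->(usr:User)
MATCH (sysProv:System {name: pairing.provider})-[:`Represented By`]->(obj:Object)
MATCH (opr:Operation {name: pairing.operation})
// A path from the user, through the operation, to the object indicates
// that authorization can be granted for this pairing.
MATCH p = (usr)-[*]->(opr)-[*]->(obj)
RETURN pairing, p;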
The System Composer is now able to create and validate a functional SoS, check for
a communications path, and validate authorization permissions.
MATCH
(sysCon)-[:Supports]->(conInterface),
(conInterface)<-[:Implemented By]-(sd)
MATCH
(sysPro)<-[:Provided By]-(proInterface),
(proInterface)<-[:Implemented By]-(sd)
WHERE
(conInterface = proInterface)
RETURN
conInterface, proInterface
Figure 13: A query to identify the matching service interface, given a common service definition
and two participant systems.
MATCH
  (sysProv)-[:Represented By]->(obj:Object),
  (sysCons)-[:Has Alias]->(usr:User),
  p = (usr)-[*]->(opr)-[*]->(obj)
RETURN p
Figure 14: A query to build and check the authorization path for a given set of communications paths.
Requests to this service result in a new function being created within the host System.
The new function has service interfaces as required to match the specific requirements.
For example, a “consumer” service interface for initiating “pull” based interaction, or a
“provider” service interface for “push” based interaction.
These dynamic systems are able to create local bridges, connecting disconnected
graphs. However, to allow interoperability between dynamic systems from different
developers, the interface for provisioning must be standardized. In [18] the dynamic systems
were provisioned using SenML. Hypermedia As The Engine Of Application State (HA-
TEOAS) is a method of creating dynamic APIs for web based applications. IETF Stan-
dards RFC 5988 [19] and 6690 [20] introduce web linking as a way to achieve HATEOAS
applications. In addition, the IETF draft “draft-ietf-core-interfaces-10” [21] (core-interfaces)
builds on RFC 6690 to propose a set of standard interfaces for interacting with URI
resources. Here we refine the provisioning interface of [18] to utilize the core-interfaces
specification. The specification handles link-format and SenML for parametric values.
The Translation System proposed in [22] is an example of a dynamic system. The
Translation System is reliant on protocol information and link information to build the
protocol translator. The dynamic provisioning request is shown in Figure 15.
[
  {
    "name": "dynamic",
    "value": "translator"
  },
  {
    "name": "service",
    "value": "<coap://{ip:port}/service_inst>;
              rt=[service_type];
              if=[interface_type];
              ct=[content_type];
              rel=provider"
  },
  {
    "name": "service",
    "value": "<mqtt://{ip:port}>;
              rt=[service_type];
              if=[interface_type];
              ct=[content_type];
              rel=consumer"
  }
]
Figure 15: A run-time request to the Translation System to provision a new protocol translator.
With this information, the translator is able to provision two interfaces that will interoperate
with the specified interfaces.
Once the protocol translator and its interfaces have been provisioned, the response is
sent. It contains link information pointing to the newly constructed service interfaces.
The translator’s response is provided in Figure 16.
[
  {
    "name": "response",
    "value": "translator"
  },
  {
    "name": "service",
    "value": "<mqtt://{ip:port}/topic>;
              rt=[service_type];
              if=[interface_type];
              ct=[content_type];
              rel=provider"
  }
]
Figure 16: The response from the Translation System once a new protocol translator instance
has been provisioned. Only the new provider interface is shown; it should be compatible with
the specified consumer interface.
Dynamic Systems are able to bridge two Service providers that require information
exchange. Because service providers cannot proactively seek out service consumers, two
providers (or two consumers) cannot communicate directly and must therefore be bridged.
In this case, rather than a translator, a dynamic “proxy” can mediate between the two Systems.
The dynamic systems, being HATEOAS-based systems themselves, will return links to the
dynamic resources from the base URI of the service. The IETF core-interfaces draft
defines Sensor, Actuator and collection interfaces. However, in the case of configuring or
provisioning dynamic systems, more complex configuration parameters are required. Here
we propose to add a new interface that will enable transport of provisioning information
for dynamic systems. The interface, which is based on the example above, includes SenML
with Link-Format embedded as an element. Link-Format items can point to external
definitions or service endpoints. The “rel” element is used to understand the meaning of
the link.
<coap://{ip:port}/service_instance>;
  rt=[service_type];
  if=[interface_type];
  ct=[content_type];
  rel=provider
Figure 18: The information capture, storage and processing requirement specification.
The new System composition routes data from the Nutrunner system through
a time calculator and stores the results in a Historian System. The GUI system is able
to get the latest up-to-date KPI from the Historian at any time.
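As an illustration only, the functional part of this specification could be expressed with the functional path query shown earlier; the system name aliases (GUI, Historian, TimeCalculator) and the service definition names other than EquipmentJob are assumptions, not names taken from the implementation.

MATCH
  (gui:System {name:'GUI'})-[:Requires]->
  (sd1:ServiceDefinition {name:'KPIStorage'})-[:`Offered By`]->
  (hist:System {name:'Historian'})-[:Requires]->
  (sd2:ServiceDefinition {name:'CycleTime'})-[:`Offered By`]->
  (calc:System {name:'TimeCalculator'})-[:Requires]->
  (sd3:ServiceDefinition {name:'EquipmentJob'})-[:`Offered By`]->
  (nut:System {name:'Nutrunner'})
RETURN hist, calc, nut;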
The System Composer takes this specification and builds functional, communications
and authorization graphs. Once the different paths are validated, the validated
communications path is turned into composition rules that can be used by the Orchestration
engine. The new SoS has not impacted operation of the station; it runs alongside
the normal SoS compositions that achieve the station objectives. Collecting data in
this manner can occur over any period of time. After a suitable length of time, the
information can be used to report an inefficient process.
The suggested process change is to move the task using the Nutrunner to the next
station. Existing tasks at the next station involve orienting the product in a manner
which provides more ergonomic access for Nutrunner operation. Hence, human operators
are able to complete the task more quickly and efficiently. The SoS composition can be
moved to the next station, allowing identical data collection and processing so that the
effect of the change can be compared. This is all done at the edge, meaning that local-scope
changes to SoS compositions do not impact centralized infrastructure.
9 Conclusion
This paper has presented a method to dynamically extract information from its source
within an IIoT network. Based on the usage of SOA and SoS theory, it is possible to 1)
visualize the IIoT as graph elements, 2) express an information query in the form of data
sources, transformations and sinks and 3) orchestrate communications paths and dynamic
bridges. The proposed graph model is used to capture the SOA and SoS elements. The
separation between functional and communications specifications allows for multi-level
reasoning on requirements. Where a graph path is disconnected between collaborating
systems, a local bridge can be injected, thereby connecting the systems. The local bridge
could be a translator (at the communications path), or a data manipulator (i.e. filter
or trigger) (at the functional path). The graph model goes further to map the SoS and
SOA architecture to the NGAC model. This enables the proposed System Composer
to evaluate access control policies to ensure that an SoS will be attainable under current
conditions. The System Composer does not hold the NGAC policy information, but is
allowed to query the policy information and retrieve the rules. The System Composer
retrieves the System context information and thus has the knowledge required to reason
upon the likelihood of access being granted between systems. The System Composer can
be used by domain engineers on the factory floor, enabling them to modify
or create KPI algorithms at run-time without the support of IT specialists. The results
show dynamic manipulation or creation of an SoS during run-time by domain engineers.
Because of local information access, centralized datastores (warehouses) are avoided.
Acknowledgement
The authors would like to thank the Far-Edge and Productive projects for support in
conducting this research.
References
[1] B. Scholten, The road to integration : a guide to applying the ISA-95 standard in
manufacturing. Research Triangle Park NC : ISA, 2007.
[2] “Reference Architectural Model Industrie 4.0 (RAMI4.0) - An Introduction,”
April 2018. [Online]. Available: https://www.plattform-i40.de/I40/Redaktion/EN/
Downloads/Publikation/rami40-an-introduction.pdf
[3] “Structure of the Administration Shell,” April 2018. [Online]. Avail-
able: https://www.plattform-i40.de/I40/Redaktion/EN/Downloads/Publikation/
structure-of-the-administration-shell.pdf
[4] A. Cockburn and J. Highsmith, “Agile software development, the people factor,”
Computer, vol. 34, no. 11, pp. 131–133, Nov 2001.
[5] “Real-time twitter trending hashtags and topics.” [Online]. Available: https:
//www.trendsmap.com/
[6] Thingworx industrial innovation platform - ptc. PTC. [Online]. Available:
https://www.ptc.com/en/products/iot/thingworx-platform/
[7] M. Blackstock and R. Lea, “Iot mashups with the wotkit,” in 2012 3rd IEEE Inter-
national Conference on the Internet of Things, Oct 2012, pp. 159–166.
[8] “Node-red.” [Online]. Available: https://nodered.org/
[9] “Ifttt helps your apps and devices work together.” [Online]. Available:
https://ifttt.com/
[10] J. Delsing, Ed., Arrowhead Framework: IoT Automation, Devices, and Maintenance.
CRC Press, 12 2016. [Online]. Available: http://amazon.com/o/ASIN/1498756751/
[11] F. Blomstedt, L. L. Ferreira, M. Klisics, C. Chrysoulas, I. M. de Soria, B. Morin,
A. Zabasta, J. Eliasson, M. Johansson, and P. Varga, “The arrowhead approach for
soa application development and documentation,” in IECON 2014 - 40th Annual
Conference of the IEEE Industrial Electronics Society, Oct 2014, pp. 2631–2637.
[12] V. Hu, D. F. Ferraiolo, D. R. Kuhn, R. N. Kacker, and Y. Lei, “Implementing and
managing policy rules in attribute based access control,” in 2015 IEEE International
Conference on Information Reuse and Integration, Aug 2015, pp. 518–525.
[13] A. S. Tanenbaum and M. v. Steen, Distributed Systems: Principles and Paradigms
(2Nd Edition). Upper Saddle River, NJ, USA: Prentice-Hall, Inc., 2006.
[14] S. R. Madden, M. J. Franklin, J. M. Hellerstein, and W. Hong, “Tinydb:
An acquisitional query processing system for sensor networks,” ACM Trans.
Database Syst., vol. 30, no. 1, pp. 122–173, Mar. 2005. [Online]. Available:
http://doi.acm.org/10.1145/1061318.1061322
[19] M. Nottingham, “Web linking,” Internet Requests for Comments, RFC Editor, RFC
5988, October 2010, http://www.rfc-editor.org/rfc/rfc5988.txt. [Online]. Available:
http://www.rfc-editor.org/rfc/rfc5988.txt
[20] Z. Shelby, “Constrained restful environments (core) link format,” Internet Requests
for Comments, RFC Editor, RFC 6690, August 2012, http://www.rfc-editor.org/
rfc/rfc6690.txt. [Online]. Available: http://www.rfc-editor.org/rfc/rfc6690.txt
Authors:
Hasan Derhamy, Jens Eliasson and Jerker Delsing
Software Architectural Style for the Industrial
Internet of Things
Abstract
Technology has played a key role as an enabler of societal evolution and revolution.
Mechanical machines of the 1800’s gave way to electromechanical machines, then to
software controlled machines and now to fully digitized and networked machines. Rec-
ognizing the fourth industrial revolution, the Industry 4.0 initiative was founded to help
bring about and drive the desired change. Computing ubiquity, interconnectedness,
intelligence and delegation drive the complexity of Industrial Internet of Things (IIoT)
applications. The result is fragmented, software-intensive nodes that often lack a strong
architectural basis. Within IIoT, software solutions are distributed over heterogeneous communications
links, computing platforms and business domains. These platforms are likely physically
separated over large distances, or could be highly mobile with large variations in security
requirements. The scale of complexity is previously unseen in the software
industry. A lack of architectural rationale will lead to brittle IIoT applications that resist
evolution.
This paper proposes an architectural style for the Industrial Internet of Things. It
develops a set of principles that guide architects and designers to decompose and create
resilient, secure, scalable and evolutionary IIoT applications. Several views are introduced
with reference to concerns derived from the Reference Architecture Model for Industry
(RAMI) 4.0. The style and principles are not created in a vacuum; the primary sources of
inspiration are Systems of Systems theory and Service Oriented Architecture.
The architectural style creates clear traceability from business and domain requirements
through to device integration.
1 Introduction
The world of computer systems, including software and hardware, is in a constant state
of evolution. Software is taking a growing proportion of the responsibility for overall
system functionality. It has become a crucial tool of almost all industries from bakeries
and arts to manufacturing facilities and banking. As systems gain more software aspects,
managing software complexity and heterogeneity becomes a key challenge [1]. In the late
80s and early 90s, it was already well recognized that software architecture is key
to managing this complexity. This early work laid the foundation for the area of software
architecture [2]. It is the study of the form and structure of software abstractions intended
Figure 1: ISA 95 pyramid with Information and Operational Technology domains shown.
computing moves, the more challenging deterministic and real-time operation becomes.
Alternatively, OT can help with achieving determinism and real-time operation require-
ments. Modern Micro-Controllers (MCU) can handle a greater share of the processing
power requirements. This means that some IT logic is moved to the OT domain. How-
ever, there is still a strong reliance on a centralized IT infrastructure, creating a single
point of failure and bottleneck. Additionally, traditional OT methods are not yet as
capable of handling large scale complexity when compared with IT software architecture.
Thus, these solutions rely on vertical integration with clear separation between IT and
OT. It is becoming clearer that this separation of concerns, based on technology domain
rather than functionality, could be reduced or broken down. A third option is to combine the IT
and OT domains - becoming IIoT [6]. This pushes away pure vertical integration
and forces a holistic view of the software architecture on each node. Key enabling
technologies and methods for this change are Edge Computing [7][8] and Cyber Physical
Systems [9].
What is software architecture and how does it impact implementation design? Un-
constrained software can satisfy most user requirements in an almost unlimited number
of ways [10]. Software is made through a series of design decisions until implementation
in code. Each design decision made adds some constraint to the solution and each con-
straint cascades to downstream design decisions. Early decisions, such as architectural,
are fundamental; they have a higher and longer lasting impact on the overall solution.
Hence, software architecture should only constrain critical (load-bearing) parts of the
solution [2], thereby allowing as much flexibility for solution variation as possible. This
practice is in line with development methods such as Agile [5].
Software design has major implications on migration toward Industry 4.0 [11]. When
designing new solutions, legacy systems, infrastructure and procedures must be taken into
account. For industrial automation this is acute because some equipment life-cycles can
span well over a decade. Software architecture must be designed to accommodate legacy
technology. On the other hand, software architecture must also minimize constraints
on how future (unknown) technologies can be integrated. Therefore, the software ar-
chitecture will evolve while the components may have limited ability to change. As the
architecture adapts for changing requirements and technologies, its architectural style
acts as an anchor through this evolution. An architectural style is the language or man-
ner in which the concrete architecture is described. It utilizes abstract concepts that
communicate intent and rationale of the software architecture. What would a suitable
architectural style be for engineering IIoT applications?
The paper is structured as follows: Section 2 discusses software architecture practice.
Section 3 gives an overview of advanced architectural styles. Sections 4 and 5 discuss
principles of systems theory and Systems of Systems (SoS). These principles, together with
the earlier discussed architectural principles, give the basis for the proposed software
architectural style. Section 6 discusses challenges of software engineering in IIoT. These
are addressed by the proposed software architectural principles presented in Section 7.
Section 8 presents the proposed decomposition method, which is guided by the architectural
principles. The views are presented in Section 9, which presents the proposed architectural
style. Section 10 concludes the paper with final remarks and an outlook on future work.
Figure 2: The 4+1 architecture with views mapped to the RAMI 4.0 software layers. The
business, communication and integration views are not represented in this architecture. On the
other hand, RAMI 4.0 does not have a development view.
The logical view is an entity model of the design. For example, this could be an object-based
decomposition. Here, application data and information models could be captured;
however, because the focus is on end-users or domain engineers, it may lack the needed
depth for software development. The development view is a detailed breakdown of the
structural aspects of the organization of the code. This assists with division of work
between developers and dependency matching of development resources. The process
view is an execution view of the design, looking into concurrency and synchronization
aspects, capturing their transformations and relationships. The physical view shows
the distribution of the software over the executing hardware, capturing certain non-functional
aspects of the design and guiding deployment requirements and planning.
Interestingly, Kruchten did not utilize separate connection and data views as
Perry and Wolf did. The 4+1 architecture does, however, pinpoint that each view targets
a different audience; this is an important element of architecture practice. Users are
interested in functionality presented in the logical view. Developers would split workload
and design code repositories based on the development view. Technical and business
integrators tune performance and scalability using the process view. System engineers
plan topology, communications and deployments based on the physical view. Kruchten
further elaborates that each view has its own set of elements, patterns, rationale and
constraints. Within each view different architectural styles can be used. The 4+1 style is
a meta style that allows freedom to adapt the architecture to a minimal set of constraints
for the intended usage. Scenarios are used as a shared validation vision - to make sure
that requirements are satisfied by each of the architectural views.
The World Wide Web (WWW), which morphed into a key technology over the 90s,
lacked a clear definition of its architectural style. Without a clear style, changes to the
WWW architecture lacked consistent stylistic principle and were based on short-term
solutions to immediate issues. In his dissertation, “Architectural Styles and the Design
of Network-based Software Architectures” [14], Fielding defines an architectural style
for the WWW and web application development. This study addressed this issue and
provided guidance on adapting and evolving WWW protocols and techniques. The the-
sis introduces software architecture as an abstraction of run-time elements. It describes
that an architecture is not only a static structural description of a system, but rather a
description of a dynamic running system. The resulting Representational State Transfer
(REST) architectural style has become a common approach to web application develop-
ment. REST combines a number of architectural styles, including Client-stateless-Server
with caching and proxying; layered; mobile code; and uniform interface. Each style uti-
lized by REST introduces a constraint to the freedom of design decisions, thus, providing
a foundation for the rationale of the design. The REST style definition can be used to de-
termine which application architectures conform to the web and those that do not. Thus,
a web application that is developed by several different teams maintains architectural
continuity and avoids embrittlement.
After looking at some of the founding and fundamental architectural styles, we now
look into some advanced styles. These styles tend to be hybrids of many other
styles. Their principles have a broad coverage and are intended to be applied to software
engineering in general, rather than a specific usage. In addition, they may take into
consideration development methodologies, such as agile based methods.
1. Standardised service contract Service contracts are used for services to express their capabilities
and method of interaction.
4. Service reusability Requires that logic is broken down into agnostic functionality,
which can be reused within different contexts. This principle is dependent on the service
modelling approach used.
5. Service autonomy Means that services should control their own logic and environment.
It is essential for reliable and consistent behaviour. The scope of the autonomy is
dependent on the service capability.
3.2 Microservices
The microservices architectural style is an evolution of SOA. It has grown out of the
experience of architects and engineers overcoming the pitfalls of the SOA style. The
arguments for moving from monolithic applications to a Microservices application are the
same as for moving to SOA. As stated by Newman in [16], Microservices are software
applications with a single purpose. Newman goes on to argue that SOA lacks clarity
on service design. In particular, guidance on service size and application decomposition
are two areas of ambiguity. In addition, Microservices also raise the notion of smart
endpoints and “dumb” pipes [16]. This is in response to heavy middleware, such as ESBs,
that is more intelligent than the services.
is bringing about a loosely coupled set of software applications that can, each, be in
perpetual beta, whilst being developed by ad-hoc combinations of software developers,
in a cooperative manner.
MAS is often used to implement Game Theory algorithms [28]. MAS applications,
though, are vast, ranging from economics and social sciences to industrial automation and power
systems.
Agent architectures tend to focus on agents’ decision-making processes, such as Procedural
Reasoning Systems and Practical Reasoning [26]. MAS is more about interactions
between agents, e.g. the FIPA specifications.
4 Systems
As defined by the US Military DoD JP 1-02, a System is:
These two definitions provide a basis for understanding what makes up a system. In
this thesis, a system is a key building block for understanding an architecture. Here we
say that a system captures a fully autonomous entity with an independent set of functions,
life-cycle and purpose (objective). A system may or may not communicate and collab-
orate with other systems. However, in our case, many of the systems will need to work
collaboratively in order to achieve their own goals. Systems thinking means that when
designing a system, all aspects of the life of the system must be considered. This means
that all software and hardware aspects, such as logical, functional, structural, security,
mechanical housing, spatial location, electronic and human interaction, are captured
under a single entity. Spatial and temporal boundaries have been the intuitive delineation
lines between systems. However, as the definitions above indicate, functional boundaries
must also be considered. Adding managerial (change/configuration management),
political and economic boundaries will capture strong lines of separation between
systems.
5 Systems of Systems
Moving on to System of Systems extends this theory with goals and objectives that require
the participation of multiple component systems to be achieved. In an SoS, the primary
modelling unit remains a single system with self-centred goals. The SoS introduces a new
264 Paper J
set of goals that are only achievable through the collaboration of participating systems.
An SoS is itself a system because it satisfies the properties of a system. However, a system
does not satisfy the properties of an SoS, therefore a (pure) system is not an SoS [31]. This
is primarily because the management and life-cycle aspects of an SoS are quite different
to those of a system. Compared with a system, an SoS has a much more unpredictable life
cycle that is prone to change. Any assumptions regarding the component systems of
the SoS must be moderated with fall-backs and mitigation plans. Management and
development of an SoS is also rather different to that of a system. Control over change,
budget and overall direction of a component system may not be under the control of the
SoS team. Hence, there is only limited influence over the systems team, and so careful
synchronization and analysis of change is required.
1. Stable Intermediate Forms While designing SoS, incomplete forms of the SoS
should be designed and put into action. As stated, an SoS is made up of component
systems. Utilizing a sub-set of these component systems to create a partially functioning
but stable SoS will act as a proof of concept and a platform for early learning. This
principle matches very well with concepts of Agile development and iterative design.
2. Policy Triage The SoS engineering team does not have control over the system
development or modes of operation. They must triage the situation and choose carefully
where to exert influence on systems teams and where/when to adapt the SoS to the system
choice. The risks are captured as over-control, leading to failure due to lack of authority, or
under-control, leading to failure due to the absence of an integrated SoS.
3. Leverage at the Interfaces The component systems of an SoS are often designed
and managed independently of each other. Furthermore, the SoS designers have limited
influence over internal architecture of the systems. This leads to SoS design having a much
greater emphasis on interfaces between systems rather than the design of component
systems.
1. Virtual Virtual SoS lack centralized management and purpose. Behaviour is not
created with intention; rather, it emerges from the resulting SoS. The Internet is an
example of a virtual SoS; there is no enforcement of Internet standards and there is no
centrally agreed purpose for the Internet. The Internet Engineering Task Force (IETF)
[34] must utilize principle 4, ensuring cooperation, to create the Internet standards.
2. Voluntary Voluntary SoS can also be referred to as Collaborative SoS. They involve a
centrally agreed purpose for the SoS, but do not prescribe required adherence to it. SoS
management does not have any coercive power over individual systems. This is perhaps
the most challenging and common form of SoS within the context of IIoT.
3. Directed In the case of directed SoS, purpose and management are fully centralized
and have significant control over component systems. The total SoS design lends itself
closely to monolithic assumptions regarding behaviour control.
5.3 Summary
In this section the Industrial Internet of Things and some software architecture ap-
proaches were discussed. Architectural styles like SOA, MAS, Web2.0 and microservices
all have contributing principles toward a software design and development for the IIoT.
However, there are certain challenges of IIoT that can be handled by one style and not
another. Pure SOA usage in a physical environment lacks the descriptive power needed for
modelling physical entities with multiple, inter-dependent services. Pure MAS has high
resource demands on nodes that are constrained in battery life and communications
capability. Web2.0 focuses predominantly on web-based commerce and interaction. There
is little attention to low bandwidth or the niche requirements of industrial use cases. The
principles of Microservices add many advantages over SOA and can be useful for IIoT.
They, however, lack the expressiveness required for understanding the physical nature of
dependent services in IIoT nodes.
A consistent argument for a new architectural style is presented, based on defining
a set of principles that are able to cope with the challenges of IIoT software architecture
design. SoS is used along with SOA as the basis for modelling interacting IIoT nodes.
Functionality is spread amongst the nodes and applications emerge from the resulting
SoS. SoS is a flexible notion that lends itself well to environments with highly varying
and heterogeneous capabilities. For example, a company or society can be modelled
as an SoS with humans making up the component systems. An employee will have a
set of personal objectives and capabilities, for example, the objective of earning an income
4. Data modelling The intersection of different domains exacerbates the data mod-
elling issue for IIoT. Architects may need to utilize the semantic web with highly typed
representations of data, or plain text/binary data over a low-power wireless link. With
evolving techniques and technology this challenge is always moving. This challenge is in
semantics of data, usage of data, storage of data and serialization of data.
6. User interactions Lines of separation between computers and humans are not
clear cut. Users interact with machines through augmented reality, robotics, etc.
Solutions cannot target a single display, and information must flow between augmented
reality and traditional displays seamlessly. Virtual commissioning and human-machine
co-working also require fluid movement of information.
9. Data security This is a big challenge, with so much networked data being made
available and with increasing operational dependence on access to this data. Networks of
multi-vendor applications also open traditionally secure environments to sloppy
development or malicious code, and the lines of attack for social engineering increase. This
challenge deals with access control, forged credentials, social engineering and networks
of multi-vendor applications.
12. Development practices OT engineers are accustomed to high-cost, well-planned
stage-gate projects, while IT developers prefer iterative, time-boxed projects. Although
there still remain differences between IT and OT, the software life-cycle is becoming
shorter.
IIoT solutions must support evolutionary development and have fast mean time to
failure. Life-cycles of IIoT systems can vary greatly. At any given time, systems will
be at different stages in their life-cycle. The challenge for architects is to delineate
between systems so that sporadic disruptions do not interrupt overall operation of the
IIoT application.
The challenges listed here are not application requirements. They are general issues
that architects must be aware of and will influence architectural considerations even
before requirements are considered. In order to address these challenges from a software
architecture perspective, first a set of architectural principles must be defined.
Figure 3: Software architecture abstraction layers. Small changes in the principles at the core
echo through all the layers.
manner independent of the architectural style (SOA). So for example, services, systems,
components and agents are referred to as decomposition elements.
1. Loose coupling (Adapted from SOA) Firstly, each decomposition element is inde-
pendent of the internal design of other elements. This means that, for example, logic,
intelligence, implementation technology or, in fact, (internal) composition can change without
impacting other elements. This helps to improve maintainability and changeability of
the elements.
5. Specialization This principle suggests that each element should have a limited
number of concerns. The decomposition elements can be responsible for varying amounts
of functional or business logic. An extreme of this principle would be, as with microservices,
to limit each element to a single responsibility. This principle must be balanced with
other principles such as autonomy.
6. Data at its source This principle guides the architecture to avoid centralized
data stores where possible. Data is a commodity with value. Access must be controlled
and in some cases charged for. This principle undertakes to make privacy by design a
characteristic of the architecture prior to any development. There are mechanisms to
allow caching and proxying, whilst allowing the data source origin to retain certain controls
over the data.
This section has presented the principles for design of a software architecture in the
context of IIoT application development. The practice of architecture in the IIoT requires
Autonomy A high level solution can be modelled as a black box with all capabilities
required to satisfy the requirements - with no detail on how it achieves this. This is insuf-
ficient and a developer will come up with an architecture based on limited information.
Studying the feature life-cycle, usage and spatial separations, this black box is broken
down into groups of capabilities. This first step of functional decomposition results in
a number of black boxes representing groups of capabilities that have similar demands
of autonomy. The next step is to look into specialization.
Specialization The architecture now consists of black boxes decomposed based on life-cycle,
spatial distribution and decision-making autonomy. Each black box of capabilities could
have quite different objectives or implementation requirements (i.e. legacy equipment).
Within each black box, split the capabilities based on their level of relatedness. Creating
specialization is at the architect’s discretion, and there is no definite way to determine
relatedness. However, where multiple processes are involved, the lines may be drawn
based on equipment scope boundaries. This will result in a higher granularity of black
boxes that are more meaningful to developers.
Data at its source The black box groupings of capabilities can now be related by data
usage and storage. The architect should guide (constrain) the solution to utilize data at
its source. This is done by introducing direct(ed) relationships between the capability
black boxes. Where it is not possible to maintain data at the source, additional
(non-functional) cache/proxy/archive capabilities should be added. The architecture now
has hierarchical groupings of capabilities decomposed by autonomy and specialization and
connected through directed data relationships.
First person perspective The black boxes can now be viewed as systems. Each
system has a set of capabilities that it is responsible to fulfil. By putting themselves into
the perspective of the system, the architect questions how changes in the environment may
impact capabilities. Looking outward at other systems, what are my functional and data
dependencies? Do I have any fail safe contingencies or are there any mitigating factors
for dependency failure or loss? How will others make use of the capabilities I am offering?
Looking inward, what is my life cycle? What mechanisms do I have for optimizing my
life time? How can I trust other systems and how can I prove who I am? What are
my security vulnerabilities or strengths? By applying the first person perspective, the
architect will have captured capabilities as concrete service interfaces. This principle
has the added benefit that an architecture that rationalizes such considerations can be
utilized by engineering teams to identify and better guide implementation technology
choices. In addition, this step must address the security constraints, keeping data safe
and safeguarding against malicious interactions.
for SoS composition is ultimately down to these principles. The architect must review the
decomposed systems to ensure they take action according to these principles.
automated configuration of the equipment and work stations, such that domain specialist
resources can make factory floor changes without assistance from IT resources. 2) There
is a need to make the work station able to operate with little or no factory-wide IT
infrastructure; it is autonomous and decoupled from factory-wide IT applications such
as MES.
9.1 Views
Figure 5 illustrates the views as they reflect the interests and concerns of all stakeholders.
A view must capture important design decisions relevant to the stakeholders
(including the architect). Views are a critical method of creating focus on relevant
concerns. The RAMI 4.0 layers define six areas of concern: business, functional, data,
communication, integration and asset. Roughly speaking, IT has focused on the upper three
layers. This is in contrast to OT, which focuses on the lower three or four layers, often providing
a thin service layer for interaction with IT infrastructure. Industrial middleware [41] has
positioned itself somewhere within the middle four layers, generally abstracting the physical
assets and remaining flexible to business processes. What is suggested here is to capture the
different RAMI 4.0 layers as application network views. Each layer captures the point-of-view
distribution across the relevant RAMI 4.0 hierarchical components. The views in this
Business view
The business view captures concerns regarding business value
generation, regulatory compliance and domain or organization
norms.
The business view must capture the networked distribution of domain logic across
the IIoT device hierarchy. This requires a catalogue of requirements that capture value-generating
activities, regulation compliance activities and any organizational norms. For
each requirement, a matching capability must be identified so that a set of business
capabilities is defined that will fulfil all business requirements. Finally, the business
capabilities should be mapped to IIoT device hierarchy groupings. At this stage there is
no direct mapping to systems, only to the system classes (i.e. product, field device, work
centre, enterprise, etc.). This means that requirements and capabilities at the business
layer are visually allocated to a specific layer of the IIoT hierarchy. The decomposition
of the capabilities, with the subsequent mapping to the hierarchy, applies the decomposition
technique defined in Section 8. This means that visualization and decomposition
of business logic leads to understanding infrastructure requirements and functional re-
quirements. Thus it is the starting point for the functional view, which must support the
business operations and objectives.
During creation of this view/form, the architect must ask themselves:
1. What are the business considerations each component group is responsible for?
2. What are the dependencies between constraints (and therefore dependencies between
groups)?
3. Are there any direct or indirect impacts of the constraints on the overall design of
each group?
For example, a regulatory requirement may require execution of certain business op-
erations. Figure 6 captures the capability mapping required to provide regulatory com-
pliance to safety nut tightening. That is, the safety nut, controlling the wheel alignment,
must be tightened to a specific torque and follow through angle. This value must be
recorded and kept on record by the truck manufacturer. This view then shows how the
responsibility for performing the logical operations are distributed amongst the differing
hierarchical components to achieve the objective. With this view, the architect is able
to validate the business rule for wheel alignment safety nut tightening.
Figure 6: The business view is made up of simple black boxes representing each layer of the IIoT
device hierarchy with the allocated business capabilities decomposed from the business concerns.
Functional view
The functional view captures concerns regarding system identi-
fication and capability assignment.
The business view captured the capabilities required to deliver value, stay within
regulations and meet any organizational norms. It has also allocated those capabilities to
the IIoT device hierarchy. Next, the business capabilities are turned into functional
requirements. Domain knowledge is required to turn business capabilities into functional
requirements. In addition, user requirements, such as interaction requirements, must
be captured. These requirements are all captured in a catalogue that will enable the
business capabilities and user interaction needs. For each catalogued requirement, a
functional capability must be defined, such that it satisfies the business capability at the
required IIoT device hierarchy level. The functional capabilities must be decomposed
according to the method in Section 8. Decomposition of the functional capabilities will
result in capability groupings that can be referred to as systems. Hence, the component
systems of the architecture have been identified, mapped to the IIoT device hierarchy
and their capability offerings/dependencies defined. The functional view has captured
how objectives are distributed across, and satisfied by, the networked systems. Each
system provides capabilities as services that can collaborate to complete a business
objective. The systems provide and consume these services, so the functional
dependencies between systems are identified as service interfaces. This is the view that
will often be used when communicating the solution back toward the customer and/or
system users.
While creating this view, the architect should be asking questions such as:
4. Are there any systems with too many dependencies (i.e. high impact if failure
occurs)?
Following on from the earlier business rule regarding safety nut tightening, the systems
and services required to satisfy the business capability are shown in Figure 7. Of course,
these functional capabilities can also be reused for other business capabilities.
Figure 7: The functional view consists of the component systems and capability services with
dependency identification.
Information view
The Information view captures the concerns of what form data
takes and what meaning it has.
The functional capabilities and dependencies have created service exchange paths
between systems. It is the business capabilities and user interaction requirements that
have motivated the systems and service exchanges. The data that is generated by the
execution of a capability or required in order to perform a capability has not been defined
yet. The Information view captures what data is required/generated, what the meaning
of the data is and what format it takes.
While creating this view, the architect should be asking questions such as:
1. What does the capability do? How does its execution affect the meaning of the
information?
2. Which capability produces the information?
3. Which capability requires the information?
4. What are the dependencies between information?
The proposed architectural style does not dictate the elements or methods used in
the view. Any ontological language could be used to describe the information or an
external ontology could be referred to here. It is also possible to perform conceptual
data modelling here using techniques such as entity-relationship modelling [42]. In Figure
8 an example Entity Relationship Diagram (ERD) is shown. It defines the conceptual
model of the information required to make use of the nut tightening capability. It also
defines the information generated by execution of the nut tightening capability. The
generated information is shown to relate to the product entity. This view has defined
what information is used by the functional capabilities and where the sources and sinks of
the information are. Using this model, developers can understand exactly what information
must be stored for the wheel alignment safety nut tightening.
Figure 8: This Entity Relationship Diagram shows the information used and generated by the
Nutrunner as it executes the nut tightening capability. It also relates the results back to the product
being worked upon.
Communication view
The Communications view captures concerns of system interac-
tion patterns and methods.
Until now, the individual systems have been identified, their capabilities and service
interfaces defined, and the format and meaning of information exchanged between systems
specified. But how does the information move between systems? In which direction is
service exchange initiated? The communications view captures how information is passed
between the systems.
While creating this view, the architect should be asking questions such as:
1. How does information move between systems?
Figure 9: The Communication view specifies the communications patterns and methods as a
shared bus between systems.
Integration view
The Integration view captures concerns of interfacing between
the virtual and real worlds.
Thus far, the individual systems are interconnected with each other through communications
buses with known information formats and capabilities. Due to the nature of IIoT
applications, many of the systems will have sensors and actuators to perform real-world
interactions. The integration view captures what hardware interfaces are required by the
systems in order to interact with the physical world.
While creating this view, the architect should be asking questions such as:
1. Do the systems require A2D or D2A conversion?
2. Are there any legacy interfaces to be supported (i.e. serial interfaces)?
3. What support for digital inputs/outputs is required?
4. Does the system need an integrated sensor?
Not all questions need to be answered. The architecture should show the important
constraints or provide a selection of possible constraints. This can give valuable freedom
to designers for integration selection, so this decision is at the discretion of the architect.
For example, Figure 10 shows the touch screen interface to the display system and the
legacy nutrunner equipment with a serial interface. The databases shown here could be
represented in the system view as an internal component of the system, or in this case
as an outside store with a direct interface.
Figure 10: The Integration view highlights important real world or legacy equipment integration
points.
Asset view
The Asset view captures concerns of physical world devices.
System requirements for integration and communication have been formed. The Asset
view is where architects identify the important devices on which to run the systems. In
addition, legacy devices are shown here, with the legacy communications bus defined in
the integration view. Additionally, the asset view should show networking appliances
and network interfaces such as Ethernet, Wi-Fi, IEEE 802.15.4, etc.
While creating this view, the architect should be asking questions such as:
1. Is the IIoT device battery powered or mains powered?
2. Is it possible for the IIoT device to utilize power harvesting?
3. How secure must the system be?
4. How much cryptographic processing will be required?
Figure 11: The Asset view communicates hardware specifics; this view can be very detailed,
or leave more flexibility to designers.
System view
The System view captures concerns of a single system with all
the software layers present.
Thus far the architecture style has focused on network views of the IIoT application.
This is no accident, as IIoT applications are highly distributed. For each of the IIoT sys-
tems defined so far, this view must be developed. The system view is an implementation
view that can be utilized for estimation and development. Therefore, it will be utilized by
software and hardware engineers in building the system. It must capture the system
concerns related to asset, integration, communication, information, function and business.
This view will capture the system architecture; it could utilize an object-oriented, layered,
pipe-and-filter, event-driven or agent architecture, etc., or some suitable combination.
For example, a suggestion could be to use a hybrid of two styles: 1) a layered
style captures the asset, integration and communication concerns and 2) an object style
captures the information, function and business concerns.
(HAL) create a platform upon which high layer objects can be manipulated. The layered
style indicates clear dependencies and responsibilities. Object oriented modelling pro-
vides flexible definitions and changing interactions between components. This is shown
in Figure 12.
Figure 12: The System view is utilized by the architect to communicate a model of the system
that can guide implementation.
10 Conclusion
The IIoT is a sphere of technological applications that encompasses a vast variety of
requirements and solutions. It is a cross disciplinary domain requiring knowledge and/or
awareness of materials, physics, electronics, embedded software, communications/net-
working, security, enterprise software and data storage/analytics. Therefore, architect-
ing an IIoT solution cannot rely on pure computer science nor on engineering. The
architect must rationalize design decisions based on a principled approach. These
principles are autonomy, specialization, data at its source and first person perspective. These
are combined with the traditional SOA principles of loose coupling, searchability and run-time
binding. Through these six principles, architects are able to decompose application
capabilities into interacting systems of appropriate size. Thus, the principles form the
guideline for an architectural style that utilizes SoS, SOA and the RAMI 4.0 model
to guide the capability decomposition process. First person, autonomous components
tend to show resilience to faults and share less internal state. Component specialization
supports an evolutionary environment and reduces the impact on other components when
a component is changed. Data at its source reflects the desire to retain ownership of data
and encourages less pooling of data, thereby increasing security. Furthermore, data at its
source reduces the semantic overhead of describing the full information context when storing data.
Acknowledgement
The authors would like to thank all partners of the Far-Edge, Arrowhead and Productive
projects for constructive discussions regarding application architecture development.
References
[1] M. Hölzl, A. Rauschmayer, and M. Wirsing, “Software-intensive systems and new
computing paradigms,” M. Wirsing, J.-P. Banâtre, M. Hölzl, and A. Rauschmayer,
Eds. Berlin, Heidelberg: Springer-Verlag, 2008, ch. Engineering of Software-
Intensive Systems: State of the Art and Research Challenges, pp. 1–44. [Online].
Available: https://doi.org/10.1007/978-3-540-89437-7_1
[3] “The ’only’ coke machine on the internet,” Carnegie Mellon University. [Online].
Available: https://www.cs.cmu.edu/~coke/history_long.txt
[4] B. Scholten, The road to integration : a guide to applying the ISA-95 standard in
manufacturing. Research Triangle Park NC : ISA, 2007.
[5] A. Cockburn and J. Highsmith, “Agile software development, the people factor,”
Computer, vol. 34, no. 11, pp. 131–133, Nov 2001.
[6] J. Delsing, J. Eliasson, J. van Deventer, H. Derhamy, and P. Varga, “Enabling iot
automation using local clouds,” in 2016 IEEE 3rd World Forum on Internet of
Things (WF-IoT), Dec 2016, pp. 502–507.
[8] F. Bonomi, R. Milito, J. Zhu, and S. Addepalli, “Fog computing and its role in the
internet of things,” in Proc SIGCOMM, 2012.
[9] E. A. Lee, “Cyber physical systems: Design challenges,” in 2008 11th IEEE In-
ternational Symposium on Object and Component-Oriented Real-Time Distributed
Computing (ISORC), May 2008, pp. 363–369.
[10] F. P. Brooks, Jr., “The mythical man-month,” SIGPLAN Not., vol. 10, no. 6, pp.
193–, Apr. 1975. [Online]. Available: http://doi.acm.org/10.1145/390016.808439
[12] H. P. Breivold, “A survey and analysis of reference architectures for the internet-of-
things,” ICSEA 2017, p. 143, 2017.
[13] P. B. Kruchten, “The 4+1 view model of architecture,” IEEE Software, vol. 12,
no. 6, pp. 42–50, Nov 1995.
[15] T. Erl, SOA Principles of Service Design (The Prentice Hall Service-Oriented Com-
puting Series from Thomas Erl). Upper Saddle River, NJ, USA: Prentice Hall PTR,
2007.
[17] “The single responsibility principle - 8th light.” [Online]. Available: https:
//8thlight.com/blog/uncle-bob/2014/05/08/SingleReponsibilityPrinciple.html
[18] T. O’Reilly and J. Battelle, “Web squared: Web 2.0 five years on,” Tech. Rep., 2009.
[19] T. O’Reilly, What is Web 2.0? Design Patterns and Business Models for the Next
Generation of Software. O’Reilly Media, September 2009.
[20] D. Merrill, “Mashups: The new breed of web app,” Tech. Rep., 2006.
[21] “Real-time twitter trending hashtags and topics.” [Online]. Available: https:
//www.trendsmap.com/
[24] “Ifttt helps your apps and devices work together.” [Online]. Available:
https://ifttt.com/
[26] M. Wooldridge, Introduction to Multiagent Systems. New York, NY, USA: John
Wiley & Sons, Inc., 2001.
[28] K. L.-B. Yoav Shoham, Multiagent Systems - Algorithmic, Game-Theoretic and Log-
ical Foundations. Cambridge University Press, 2009.
[30] ISO/IEC/(IEEE), “ISO/IEC 42010 (IEEE Std) 1471-2000 : Systems and Software
engineering - Recomended practice for architectural description of software-intensive
systems,” 2007.
[31] “Systems engineering guide for systems of systems,” Washington, DC, USA, Tech.
Rep., 2008.
[35] P. Reed, “Reference architecture: The best of best practices,” Tech. Rep., 2002.
[36] M. Bell, SOA Modeling Patterns for Service Oriented Discovery and Analysis. Wiley
Publishing, 2010.
[37] T. Erl, SOA Design Patterns, 1st ed. Upper Saddle River, NJ, USA: Prentice Hall
PTR, 2009.
[40] J. Delsing, Ed., Arrowhead Framework: IoT Automation, Devices, and Maintenance.
CRC Press, 12 2016. [Online]. Available: http://amazon.com/o/ASIN/1498756751/