Full Text Thesis
By
Falah Alshahrany
June 2016
Abstract
The real time monitoring of context aware environmental activities, based on distributed detection, is becoming a standard in public safety and service delivery in a wide range of domains (child and elderly care and supervision, logistics, circulation, and others). The safety of people, goods and premises depends on a prompt reaction to potential hazards identified in real time, at an early stage, to engage appropriate control actions. Effective emergency response can be supported only by available and acquired expertise or elaborate collaborative knowledge in the domain of distributed detection, which includes indoor sensing, tracking and localizing.
This research proposes a hybrid conceptual multi-agent framework for the acquisition of collaborative knowledge in dynamic, complex, context aware environments for distributed detection. This framework has been applied to the design and development of a hybrid intelligent multi-agent decision support system (HIDSS) that supports a decentralized active sensing, tracking and localizing strategy, and the deployment and configuration of smart detection devices associated with active sensor nodes wirelessly connected in a network topology to configure, deploy and control ad hoc wireless sensor networks (WSNs). This system, which is based on the interactive use of data, models and knowledge bases, has been implemented to support fire detection and access control fusion functions aimed at elaborating:
An integrated data model, grouping the building information data and WSN-RFID
database, composed of the network configuration and captured data,
A multi-criteria decision making model for generic detection devices distribution, ad hoc
WSNs configuration, clustering and deployment, and
Predictive data models for evacuation planning, and fire and evacuation simulation.
An evaluation of the system prototype has been carried out to enrich information and
knowledge fusion requirements and show the scope of the concepts used in data and process
modelling. It has shown the practicability of hybrid solutions grouping generic homogeneous
smart detection devices enhanced by heterogeneous support devices in their deployment,
forming ad hoc networks that integrate WSNs and radio frequency identification (RFID)
technology.
The novelty of this work lies in the web-based support system architecture proposed in this framework, which is based on the use of intelligent agent modelling and multi-agent systems, and on the decoupling of the processes supporting multi-sensor data fusion from those supporting different context applications. Although this decoupling is essential to appropriately distribute the different fusion functions, the integration of several dimensions of policy settings for the modelling of knowledge processes, and of intelligent and pro-active decision making activities, requires the organisation of interactive fusion functions deployed upstream of safety and emergency response.
Publication List
Conference Publications:
1. F. Alshahrany, H. Zedan, and I. Moualek, "A conceptual framework for Small WSN Configuration using Intelligent Decision Support Systems for Emergency Preparedness and Response," in Innovative Computing Technology (INTECH), Third International Conference, London, 2013.
Journal Publications
1. F. Alshahrany, H. Zedan, and I. Moualek, "A conceptual framework for Small WSN Configuration using Intelligent Decision Support Systems for Emergency Preparedness and Response," IEEE, 2013, DOI: 10.1109/INTECH.2013.6653726.
Poster Publications:
2. Falah Alshahrany and Maysam Abbod, "Using hybrid WSN approach and intelligent decision support systems for fire detection and mitigation in smart buildings," the 8th Saudi Students Conference (SSC2015), 31st Jan - 1st Feb 2015, London.
Dedication
My special thanks go to:
o The almighty Allah, who in his infinite mercy gave me the grace,
strength, health, endurance and foresight to undertake this research,
o My beloved father (Allah's mercy on his soul), who passed away
during the course of this research,
Special Acknowledgment
Acknowledgements
I express my huge thanks to the Saudi government, represented by the Ministry of Interior
and General Directorate of Civil Defence, for offering me a research scholarship, and for
giving me previously the chance to get the highest degree in information management. May
Allah help me to repay and serve my country Saudi Arabia and contribute to its development!
I would like to extend my heartfelt gratitude to Prof. H. Zedan and all my friends in the UK
and in Saudi Arabia for their assistance and encouragement during my PhD study.
My sincere thanks and gratitude go out to my extended family in Kamees Mushet with a very
special thanks to my brothers, Mohammed, Turki, Nasser, Fayez, Fahad, and Badah. I owe a
great deal to them for their hospitality, encouragement and moral support. Also, my deepest
thanks go to my sisters Rajaha, Sabah, Twfeq, Dalal, Gazel, Naefa, Mobarka and Hana for
their love, consideration, and constant care.
Finally, I am greatly indebted to Abdullah's mother and Husam's mother for their understanding when things were below expectation. Their great support and encouragement are the benchmarks for the success achieved in my life. My love extends to my lovely sons, Abdullah, Mohammed, Ail and Husam, and my lovely daughters Khloud, Amjad and Abrar.
Declaration
i. The thesis comprises only my original work towards the Ph.D. except
where indicated.
ii. Due acknowledgement has been made in the text to all other material
used.
Falah Alshahrany
Table of Contents
Abstract .................................................................................................................................................. i
Publication List ................................................................................................................................... iii
Dedication ............................................................................................................................................. v
Special Acknowledgment..................................................................................................................... vi
Acknowledgements ............................................................................................................................. vii
Declaration.......................................................................................................................................... viii
Table of Contents ................................................................................................................................. ix
List of Figures................................................................................................................................... xviii
List of Tables ...................................................................................................................................... xxi
List of Abbreviations ........................................................................................................................ xxii
Chapter 1: Introduction ....................................................................................................................... 1
1.1 Introduction ......................................................................................................................... 1
2.3.3 Network Architectures .................................................................................................. 18
2.3.4 Multi-Function Sensor Nodes ...................................................................................... 19
2.3.5 Multipurpose WSNs ...................................................................................................... 19
2.4 Radio Frequency Identification ......................................................................................... 20
2.4.1.3 Data-On-Network...................................................................................................... 22
2.7.2 Multi-Agent Systems and Agent-Based Models ........................................................... 38
2.7.2.1 Agent Typology ........................................................................................................ 38
2.8.4 Web-Based Learning Support Systems ......................................................................... 54
2.8.5 Web-Based Service Management ................................................................................. 54
2.8.6 Web-Based Knowledge Management ........................................................................... 55
2.8.7 Self-Organizing Support Systems ................................................................................. 55
2.9 Decision Making ............................................................................................................... 56
Chapter 4: The Generic Design ......................................................................................................... 90
4.1 Introduction ....................................................................................................................... 90
4.2 Motivations........................................................................................................................ 91
4.3.3.3. The Web-Based Group Decision Support System Capabilities .............................. 103
4.6.1. Indoor Distributed Detection Using Knowledge Based Design and Ad hoc Integrated
WSN-RFID...................................................................................................................... 113
4.6.2. Matching Sensors Using a Knowledge-Based System ............................................... 114
4.6.2.1. Sensors .................................................................................................................... 114
4.6.2.3. The Sensor Mission Matching ................................................................................ 115
5.2.3. Physical Entities .......................................................................................................... 133
5.2.4. Devices Support .......................................................................................................... 133
5.2.5. WSN Database ............................................................................................................ 134
5.2.6. Link between the HIDSS architecture and the case study .......................................... 134
5.3. The Functional Processes ................................................................................................ 135
5.3.1. The Sensor, RFID tag, Heterogeneous Devices and Gateway-Router Databases .... 136
5.3.2. The Sensor Node Design and Database ...................................................................... 138
5.3.2.1. The Sensor Node Composition ............................................................................... 138
5.3.2.2. The Sensor Node Specifications and Resources Calculation Models ..................... 139
5.3.9.2. Nodes Connection Types ........................................................................................ 162
6.4.4.1 Data Volumes, Visualisation and Backup............................................................... 194
List of Figures
Figure 1.1: Research problem ............................................................................................................... 4
Figure 3.1: Integrated conceptual framework for web-based hybrid intelligent decision support
systems ............................................................................................................................... 62
Figure 4.1: Service interoperability based on generic and recursive design ........................................ 92
Figure 4.3 Hybrid intelligent web-based service decision support system architecture ...................... 94
Figure 4.10: Generic active sensor node functional and system architecture .................................... 120
Figure 5.1: Sensor and data fusion integration process ...................................................................... 131
Figure 5.2: Functional processes based on the entity model .............................................................. 136
Figure 5.7: Example of virtual building layout, encoding systems and data ..................................... 144
Figure 5.10: Geometric models for homogeneous devices allocation ............................................... 147
Figure 5.15: Sensor node pre-configured reading interval specification ......................................... 155
Figure 5.16: Sensing spacing distance segmentation to form sensor nodes layers ............................ 156
Figure 5.17: Spacing distance and sensor node substitution .............................................................. 156
Figure 5.18: Sensor node and sensor status process model setting ..................................................... 157
Figure 5.20: Sensor node clusters aggregation .................................................................................... 161
Figure 5.24: Control access and tracking of people and objects ........................................................ 165
Figure 5.25: Example of screen from the virtual control room .......................................................... 166
Figure 5.28: Predictive data for fire alarm simulation ....................................................................... 169
Figure 6.4: Pre-defined exits points for each room ............................................................................ 188
Figure 6.10: Heterogeneous devices specifications, and navigation and flow model. ...................................... 199
List of Tables
Table 6.1: Relation between spacing distance and number of sensor nodes ...................................... 182
List of Abbreviations
ABM Agent-Based Modelling
AHP Analytic Hierarchical Process
AIDC Automatic Identification Data Collection
CC Cloud Computing
CLDs Causal Loop Diagrams
CM Cognitive Maps
CN Coordinator Node
CVWE Collaborative Virtual Work Environment
DASH Desktop Management
DCOM Distributed Component Object Model
DoS Denial-of-Service
DPM Dynamic Power Management
DSB Domain Service Bus
DSS Decision Support Systems
EBL Enquiry-Based Learning
EDN End Device Node
EEPROM Electrically Erasable Programmable Read-Only Memory
EP Ending Point
ESUP Energy Supply Use Points
ETL Extraction, Transformation and Loading
FODA Feature Oriented Domain Analysis
G Gateway
GDSS Group Decision Support Systems
GSS Group Support Systems
HIDSS Hybrid Intelligent Decision Support System
HSB Health Service Base
HTTP Hyper Text Transfer Protocol
IC Integrated Circuit
ISR Intelligence, Surveillance and Reconnaissance
KB Knowledge-Based
KIIDSS Knowledge-Intensive Intelligent Decision Support Systems
KM Knowledge Management
LMS Learning Management Systems
LNA Low Noise Amplifier
MADM Multiple Attribute Decision-Making
MAS Multi-Agent System
MB Model-Based
MDM Master Data Management
NetMan Network Management
NLDP Node Location Distribution Patterns
OLAP On-Line Analytical Processing
OLTP On-Line Transactional Processing
PA Power Amplifier
PDAs Personal Digital Assistants
PR Procedural Reasoning
QoD Quality of Data
R Router
RBR Rule-Based Reasoning
RFID Radio Frequency IDentification
RN Router Node
SC Soft Computing
SD Spacing Distance
Sl Segment Length
SM Spatial Multiplexing
SMI Storage Management
SNC Sensor Nodes Clusters
SOA Service-Oriented Architecture
SOAP Simple Object Access Protocol
SP Starting Point
SSD Sensing Spacing Distance
SSN Sensor Semantic Network
UART Universal Asynchronous Receiver/Transmitter
UPC Universal Product Code
VMAN Virtualization Management
WBKBS Web-Based Knowledge-Based System
WBKBSS Web-Based Knowledge Support System
WBSS Web-Based Support Systems
WSAM Web Services Architecture Model
WSDL Web Services Description Language
WSMN Wireless Sensor Multimedia Network
WSN Wireless Sensor Network
Chapter 1: Introduction
1.1 Introduction
The emerging field of pervasive computing is imposing new research challenges for the
generic design of context aware computer systems needed to implement intelligent solutions
for the configuration, support and effective use of smart spaces characterised by their
location, content, and environmental and data-aware attributes. These challenges concern the
validity, durability and quality of solutions that integrate varied individual and group
knowledge activities, supported by an increasingly complex seamless interaction of people, smart devices, intelligent agents and services interoperating in these smart places and configured in information and knowledge context applications. The context aware property of these systems, which originated from the emerging field of ubiquitous or pervasive computing, relates to linking changes in the environment with computer systems, these changes being induced by the dynamic behaviour of people, devices and software agents. However, text-aware capabilities dealing with instant text entry (very fast text input) are not considered in this research.
1.2 Research Motivations
The study of several problems, including real-time predictions and decision making, maximizing the scope of multi-agent interaction, the aggregation of decision making models, and integrated web-based services composed of intelligent agents and smart devices, is the focal point of the new generation of intelligent solutions and systems that successfully integrate communication, information, computing and other advanced technologies.
Of great interest in the research domain considered is the development of advanced hardware and software intelligent agents, in the sense of pervasive computing, which further extends the multi-agent interaction scope. The involvement of humans in knowledge elicitation,
discovery and extraction has led to their incorporation as additional agents in the multi-agent
framework. They are represented by individual and collective processes describing their roles
and functions. Their involvement poses the problem of the representation of cognitive
differences reflected in individual agent roles and behaviours that might affect their group
interaction.
Interoperable dynamic web services with a good quality of service, enabling the internet to play a preponderant role in the design and implementation of self-tuning distributed multi-agent systems, are an essential aspect of edge computing. The agent coordination required to facilitate interaction within the system is supported by hybrid support service composition rules, which structure the agent service configuration depending on the agents' roles, behaviours, and influence on each other.
The study of these problems is based on the modelling of intelligent agents organised as hierarchical collections of knowledge functions that are identified, analysed, structured and configured in a programmable coordination and cooperation scheme constructed as multi-agent systems. Such systems support agent interaction and exploration of the knowledge domain, following the strategy of decoupling data capture and processing from information and knowledge fusion.
o Heterogeneous raw soft and hard big data from different sources.
o Intelligent agent modelling, interaction and distributed multi-agent systems,
and self-tuning multi-distributed systems.
The hybrid intelligent decision system should support the following features:
aware real-time data, information and knowledge fusion, and multi-agent interaction based on
collaborative decision making.
The complexity in designing and programming such systems involves the exploration of
several research domains for the understanding of the development of context modelling
concepts needed to support data-aware context capture and processing, and information and
knowledge fusion. These domains are examined to elicit the key requirements for intelligent
agents to reason about information context and interact with each other to enable knowledge sharing in open and dynamic information context environments.
The computer hardware dimension, platforms and the technical services supporting the
business processes implementing the knowledge functions are not included in this research to
limit the diversity of research concerns.
A good understanding of indoor sensing and monitoring requirements, in the context
of surveillance activities as part of civil defence,
The implementation of a case study to validate the concepts developed in this research
that include:
o Predictive data for evacuation planning, fire detection and evacuation simulation.
Integrates the three domains of the study: problem analysis, solution design and
system development, and
The constructive method used in this research methodology has focused on the use of:
A hybrid approach to compose entities at different conceptual levels,
Knowledge processes modelling and specifications derivation for the solution design,
Different classes of support capabilities for the support system structure, and
Real time predictive data and decisions for the system outputs.
The state of the art of surveillance and indoor sensing, WSNs, RFID, WSN and RFID
integration, and hybrid intelligent decision support systems that include intelligent
agents and agent modelling, multi-agent systems, decision support systems and group
decision making.
The integration of WSN and RFID technologies.
The design and implementation of the HIDSS, as the pilot system supporting the case
study.
The testing of the system prototype requires a case study aimed at designing an integrated
WSN-RFID, based on a virtual building layout, to support:
a) Separating the capture and communication of data and its processing performed at
both the network level and the client system, and
b) Turning the real time data into knowledge and decisions, using a hybrid intelligent
decision support system to model the integration of predictive data, knowledge
and explanations.
2) The resolution of WSN optimization issues to facilitate intelligent real time decision making in
disaster management, mainly:
a) The distinction in use and importance between heterogeneous and homogeneous
devices for integrated and aggregated decisions,
b) The importance of local data processing at the node level (in-network processing), and its
implications for WSN performance in terms of network congestion avoidance,
e) The evaluation of the ad hoc WSN and WSN-RFID integration design in terms of:
Configuration, reconfiguration, optimal deployment, and
The generic design of surveillance domain context applications using the same
integrated computer-based support for real time big data capture, and for predictions and
decision making,
The optimal sensing coverage,
The decoupling of fusion functions separating the processing of data from the
elaboration of information and knowledge.
The proposed generic recursive design conceptual framework is the research contribution in
the following knowledge areas:
Enhancement of the integration of RFID and WSN technologies: The generic design
conceptual framework presented in this research is a contribution to the enhancement of the
integration of RFID and WSN technologies to extend ad-hoc wireless sensor networks. Such
configurations, which are supported by hybrid intelligent decision support systems, contribute to the better configuration and management of ad hoc networks, and extend the identification, tracking and sensing capabilities required for the intelligent monitoring of activities and for interaction support to people in emergency response when evacuating after a hazard occurrence. The scope of this
contribution includes enabling these two technologies to extend the penetration of pervasive
computing in every domain of context aware environments, and reducing the boundaries of
informal big data and decision making processes to reflect real time situations and
instantaneous decision making.
Service oriented architecture integration: This contribution is also in the domain of service
oriented architecture, integrating different computing techniques to support the service
provision and composition that requires the development of intelligent agents including both
intelligent devices and software agents. These intelligent agents are the vectors of the
collaborative knowledge discovery and computational intelligence required by adaptive and
flexible decision models for the processing of context-aware data using the mutual benefits of
the varied service access devices. They integrate intelligence into decision processes and generate a variety of solutions using predictive analytics automation.
Distributed detection based on small ad hoc networks: The major benefit of this framework
is the control of distributed small ad hoc networks of activity surveillance in a fully
automated setting required for the optimal use of emergency response resources. The use of
both physical and logical clustering levels provides flexible and adaptable policy-making support to various organizational scenarios for different types of emergency responses.
this solution, a conceptual framework is proposed for the knowledge domain study, the
elicitation of domain requirements and the specifications for the generic integrated design of
a hybrid intelligent decision system to support sensing-monitoring knowledge domain
analysis, sensor node design, integrated WSN-RFID configuration and deployment, and
context aware real-time data capture and sensor node and WSN data fusion. These
requirements, which include indoor sensing, localising and tracking of people and goods, are used for the homogeneous and heterogeneous sensor node specifications.
Results from the case study are finally presented in Chapter 6 to show the practicality of the
conceptual framework, and illustrate the implementation of the solution in emergency
preparedness for fire detection.
This research focuses on hybrid intelligent context aware systems that can provide a reliable
and responsive support to knowledge activities. The complexity in designing and
programming such systems involves the exploration of several research domains to
understand the development of context modelling concepts needed to identify, structure and
deploy data-aware context capture and processing, and information and knowledge fusion.
These domains are examined to elicit the key requirements for intelligent agents to reason
about information context and interact with each other to enable knowledge sharing in open and dynamic information context environments.
The computer hardware dimension, platforms and the technical services supporting the
business processes implementing the knowledge functions are not included in this research to
limit the diversity of research concerns.
Chapter 2: Literature Review
2.1 Introduction
This research aims at studying the design requirements and specifications for a real time
decision surveillance system for detecting and tracking multiple people and monitoring their
activities in an indoor environment. This monitoring includes analysing events such as
interactions between people and objects, access control management, fire detection, and the evacuation of people and objects.
Problems inherent to the capture of real time context environment data required for intelligent
instantaneous decision making are examined in the light of the new hardware and software
developments, integrating a panoply of technologies which include WSN, RFID, smart
detectors, intelligent agents, web-based services, multi-agent distributed architectures and
hybrid intelligent decision support systems to translate sensing and identification activities
into services.
The requirements elicitation for a cooperative control model which involves a dynamic
configuration of decision-making components is an essential step within the design process of
hybrid intelligent decision support systems in the domain of sensing and monitoring. These
components, with limited processing capabilities, locally sensed information, and limited inter-component communications, are associated with a wide variety of homogeneous and heterogeneous devices, basic, smart or intelligent, all interconnected in a panoply of networks.
Of great importance in the design process of hybrid intelligent decision support systems are
the components of the decision making process needed to automatically support a cooperative model aimed at processing the events inherent to the configuration and management of the network and its environment. These components are the problem definition, the design of the solution, the elaboration of the decision alternatives, and the evaluation of the decision outcomes.
The participation of all these elements is determinant in the contribution to the collective goal
of a controlled environment in terms of network congestion control and routing. The resulting
global decision making process, which requires complex management, is characterised by
the distributed nature of data storage and processing, sensing, and actuation within the
network, and also the necessary back end support structured in a distributed multi-agent
systems architecture and delivered in the form of cloud web-based composite services and
applications.
This global problem covers complex networks of interacting intelligent agents, dynamic
systems, and hybrid humans-machines appearing in broad applications in scientific,
engineering, biological, environmental and social systems supporting research and real life
activities.
2.2 Surveillance
Surveillance is the activity of monitoring environmental changes and a panoply of processes
showing the behaviour of entities (people, goods, species and elements of the natural
environment) interacting over a period of time in a defined space [1]. There are several types
of surveillance, and among the main ones are:
This research is concerned with indoor spatially distributed sensor nodes wirelessly
connecting integrated devices containing sensors and RFID tags and readers to
monitor physical and environmental conditions, and process captured real time data in the ad
hoc network and/or a main location. This type of surveillance is important for the activity of
monitoring environmental changes that is necessary to support the detection and mitigation of
events causing a panoply of damages to people and the environment, and to prevent these potential hazards from occurring.
Surveillance requires adequate resources which result from diverse technologies that are developing rapidly and are generally combined with high-tech materials for use in every aspect
of daily life. These are changing our approach to surveillance solutions to gather intelligence
in several security domains: military, border, retail, and residential security, by governments
and corporations. This work focuses on the context of hazard assessment and emergency
response.
2.2.1 Surveillance Definition
The surveillance literature is abundant, and this work is limited only to the configuration of
smart devices to design robust and effective surveillance and monitoring solutions for
domestic and corporate applications. It focuses on how technology has changed our approach
to real time information and intelligence gathering and how it is used instantaneously to react
to environmental changes.
Surveillance technologies have changed the way surveillance is perceived in ensuring the safety and security of people and goods, shifting from individual responsibility to the corporate duty and responsibility for health and safety management to curtail illegal or intrusive behaviour against others, as mandated by law.
Rapid advances in digital and communications technologies have made a wide range of
theoretical capabilities practical with sensing and data processing. Tremendous research and
development work has been undertaken over the last two decades on pervasive ubiquitous
communication and computing systems, suggesting more elaborate physical configurations of a very fast growing number of varied everyday objects, wirelessly connected, precisely identified and tracked or located, with a log of their behavioural changes. This has resulted in
significant changes in network system architectures being needed to support a virtually
unlimited application potential that includes predominantly surveillance, monitoring and
assessment.
This research focuses on multimodal surveillance technologies and systems that have been
developed for the perception of people, activities and their interactions, involving a varied
range of domains for surveillance data, including the automatic detection and track keeping
of faces visible in a video sequence, establishing their positions and movements. Some
technologies are based on the use of a 3D tracking audio system that employs audio and
video sensors, integrating miniaturization and emerging wireless systems, creating an
increasing number of sophisticated devices [6]. Surveillance resources include smart devices
that help to effectively capture, process and assess information in a data aware context using
intelligent configurations and mechanisms.
Surveillance and monitoring are used for a variety of applications in the dynamically
changing civilian, industrial and military environments. This has led to a great demand for a
wide range of innovative elaborate sensors requiring sensing data and rules for their location,
deployment and connection to form adaptive sensor configurations. The design of support to
surveillance and monitoring activities involves cutting-edge technologies, which include
knowledge-based (KB) signal and data processing, model-based (MB) sensing rules and
surveillance models, waveform diversity, wireless networking, mobile and smart devices,
advanced computer architectures, and modelling and supporting software languages [7].
Knowledge-based techniques have been used to support sensing and data processing within and between platforms of sensors and communication systems; the resulting activities, in terms of surveillance, monitoring, imaging and communications within the network environment, are similarly supported. In network configurations, sensors cooperate with other
users and sensors, sharing information and data, and their performance can be enhanced by
changing their configuration as the environment changes [8, 9].
Surveillance is very context sensitive and requires planning strategies, data analysis, and
accurate interpretation of the results.
Monitoring frameworks differentiate between policy-based and targeted monitoring. Events generated against defined security policies require analysis in the environment context, using the monitoring resources that condition the monitoring system. Concrete and precise acceptable behaviours and/or standards are coded as security policies defined to support monitoring procedures, and their compliance needs to be evaluated in the context of surveillance management.
All companies and organisations are bound by security legislation governing their activities, translated into policies. Two types of policies are used for surveillance monitoring: external policies, which deal with regulatory compliance consisting of adherence to externally enforced controls, and internal policies, which deal with internally implemented controls and govern employee security. Regulatory compliance covers all security aspects that require real time verifiable compliance with sets of best practice elaborated using control objectives, in the context of configuration control monitoring, to set up the acceptable behaviours and/or standards. The development of monitoring procedures is required to maintain compliance, ensuring that changes made to the surveillance resources are updated in the surveillance control system, and that the privacy of personal information is protected [10]. The violation of standards requires reconciling the detected changes to critical system values, for each standard attribute, with the records in the configuration control repository. The surveillance resource logs are an essential part of the configuration control system.
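As a purely illustrative sketch of this reconciliation step (the attribute names and repository entries below are hypothetical, not taken from the thesis case study), detected system values can be compared against the configuration control repository and deviations flagged as violations:

```python
# Illustrative reconciliation of observed system values against the records held
# in the configuration control repository; attribute names are hypothetical.

REPOSITORY = {  # expected values for each monitored standard attribute
    "door_controller.firmware": "2.4.1",
    "camera_07.retention_days": 30,
}

def find_violations(observed: dict) -> list:
    """Return (attribute, expected, observed) triples where the values differ."""
    return [
        (attr, expected, observed.get(attr))
        for attr, expected in REPOSITORY.items()
        if observed.get(attr) != expected
    ]

print(find_violations({"door_controller.firmware": "2.4.1",
                       "camera_07.retention_days": 14}))
# -> [('camera_07.retention_days', 30, 14)]
```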
Surveillance results in intelligence gathering and requires, among other monitoring resources,
different devices depending on the activities to be watched over. These distributed devices
grouped in a network configuration could be employed to monitor physical and
environmental conditions in real time, and used in the control of instruments to provide
efficient reliable communications with the network.
Surveillance activities and devices are an extensive domain characterised by the integration of a panoply of technologies used in the design of smart surveillance, monitoring and tracking devices and accessories. Their diversity has made them an intrinsic part of domestic, governmental, corporate, retail, and residential security. These devices, which are fixed or portable, are used in a wide range of fields of endeavour to continuously extend the limits of real-
time applications and knowledge engineering needed to support the globalisation of
geographical and socioeconomic information. The access to this information and its
dissemination is enhanced by the increasing availability of more elaborated web services
supported by multi-distributed agent systems configured with satellites and network channels
[11]. These services support intelligent domestic and corporate applications, search and
rescue operations, people and objects tracking and monitoring.
It has been established in the context of the Actor Network Theory [12] that agencies exist
only relationally in and through process networks involving close interaction of humans and
intelligent agents, including both smart surveillance and monitoring devices and software
agents.
WSNs support wireless communication, coordinated sensor node operations and reporting to sink(s), while maintaining energy efficient control in all WSN protocols. They are used as
effective measurement tools to observe the environment, and make decisions to enhance the
processes of monitoring environmental changes or exercise some control on the surrounding
environment by responding to these changes. The decision making aspect can be included
inside the network or left outside.
The most remarkable development has resulted in the emergence of more elaborate wireless
networks: wireless LANs (WLANs) and mobile ad hoc networks (MANets) where IEEE
802.11 provides full scale connectivity, and small and low-cost computation and
communication devices called sensor nodes to compose wireless sensor networks (WSN)
[14]. These devices, which originally had a very basic sensing role, have become a catalyst for a major change in how we communicate and interact with the environment, mainly
when real time decisions require real time data in a context aware environment. They are
grouped and configured in networks organising the cooperation among composed nodes
grouped into clusters to deliver real-time sensed data for analysis and measurement. Although
the capability of each individual sensor node is limited in terms of storage capacity, processing power and energy, the aggregate performance of the entire network
with a wide range of multifunctional wireless sensor nodes, with more elaborate sensing,
wireless communications and computation capabilities is sufficient for the support of a
profusion of applications [15].
2.3.4 Multi-Function Sensor Nodes
Sensor nodes forming WSNs self-organise into appropriate network configurations after their initial deployment, regardless of the dispatching method used: planned or, typically, ad hoc.
They create multi-hop connections between themselves. They also contain on-board sensors
aimed at sensing, i.e. collecting environmental data, including acoustic, seismic, infrared or
magnetic information about the environment, using either continuous or event driven working
modes. Additional location and positioning information can also be obtained through the
incorporation of global positioning devices or using local positioning algorithms and the
computing devices inside the network [15].
On-board sensors are located in a sensor module which provides a plurality of parameter
sensors and can interface with external control operations of one or more processor control
systems located inside or outside the network. The sensor node is equipped with single or
multiple integrated circuit boards integrating embedded microprocessors, radio receivers,
RFID tags, and power components that enable sensing, computing, communication, and
actuation. Additionally, a storage capacity is included to contain internal configuration and
computational rules, and data collected by the sensors when it is not relayed in real time to
processing devices inside and outside the network. These capabilities are needed for the
dynamic access by other sensor nodes and for back-end system queries from users [17].
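As a minimal data-structure sketch of the node composition described above (the field names are illustrative assumptions, not the thesis's actual node schema):

```python
# Illustrative data structure for the sensor node composition described above;
# the field names are assumptions, not a standard.
from dataclasses import dataclass, field
from typing import List

@dataclass
class SensorNode:
    node_id: str
    sensors: List[str]              # e.g. acoustic, seismic, infrared, magnetic
    rfid_tag: str                   # identifier of the embedded RFID tag
    radio: str                      # radio transceiver type
    storage_kb: int                 # on-board storage for rules and buffered data
    buffered_readings: List[dict] = field(default_factory=list)

    def sense(self, reading: dict) -> None:
        """Buffer a reading locally when it is not relayed in real time."""
        self.buffered_readings.append(reading)

node = SensorNode("SN-01", ["infrared", "acoustic"], "TAG-01", "IEEE 802.15.4", 128)
node.sense({"infrared": 0.7})
print(node.buffered_readings)
```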
The concept of scoping as a middleware building block and abstraction layer for these tasks
in the WSN infrastructure has been used in the design of modular architecture in order to
meet the requirements of multi-purpose WSNs [19]. Sensor nodes can be pre-programmed
for a number of roles with associated tasks, with their roles selected following event conditions at runtime, which helps to reduce or eliminate the proliferation of components with functional overlap. This has opened a new research direction for the design of middleware for resource sharing in multi-purpose WSNs [20] to meet the varying Quality of Data (QoD) requirements of multiple concurrent applications, and to improve the configuration, adaptability and customisability of WSNs [21].
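A minimal sketch of this runtime role selection, with hypothetical role names and event conditions (not taken from [19-21]), might map observed events to one of the node's pre-programmed roles:

```python
# Minimal sketch of runtime role selection for a multi-purpose sensor node.
# Role names and event conditions are hypothetical.

PRE_PROGRAMMED_ROLES = {
    "smoke_detected": "fire_detection",
    "tag_in_range":   "access_control",
    "low_battery":    "relay_only",
}

def select_role(event: str, current_role: str = "idle") -> str:
    """Switch the node to the role associated with the observed event condition."""
    return PRE_PROGRAMMED_ROLES.get(event, current_role)

print(select_role("smoke_detected"))  # -> fire_detection
print(select_role("no_event"))        # -> idle (keeps the current role)
```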
The first RFID emerged in the 1940s as a spying device energized and activated by waves from an external source [24], before a passive radio transponder with memory was presented as the first RFID ancestor in the 1970s, with a 16-bit memory, used as a toll device [25]. The first modern RFID system emerged a decade later [26].
The development of radio waves led to the design of Radio Detection And Ranging (radar), based on the principle that radio waves reflect off an object, enabling its range, height, and bearing to be determined. The radio wave, communication and integrated circuit technologies
developed further to create the transponder, a device that emits an identifying signal in
response to an interrogating received signal, to provide real-time monitoring and
identification of mobile objects. This enabled later the development of RFID systems based
on far field systems. RFID continues to play a vital role in the technological revolution along
with WSNs, smart mobile devices, web-based services and other technologies to gradually
inter-connect the real world [27].
The RFID identification process is based on the use of two-way radio transmitter-receivers, called interrogators or readers, which rely on a magnetic field to send a signal to the tag or label and read its response. A RFID system is made up of three components: RFID
readers or interrogators, RFID tags, and a backend system which is connected to a database
containing information about tagged objects.
RFID supports the Automatic Identification Data Collection (AIDC) and is constantly
making inroads in offering greater flexibility, higher data storage capacities, increased data
collection throughput, greater immediacy and accuracy of data collection, enhanced accuracy
and security, and an ideal data collection platform for a wide range of applications.
A RFID system uses identification devices (tags) or labels (smart labels) attached to the objects to be identified. The RFID tag is composed of an integrated circuit (IC) embedded in a thin film medium, which stores information in the memory of the RFID chip. This information is transmitted to an RFID reader by means of an antenna circuit embedded in the RFID inlay, using radio frequencies. The data stored normally represents a unique serial number, which is communicated to a RFID reader and used as a reference to look up, in a host system database, more details about the object attached to the tag.
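As a minimal illustration of this lookup mechanism (the serial numbers, records and in-memory dictionary below are hypothetical stand-ins for a host system database):

```python
# Illustrative sketch only: an in-memory dictionary stands in for the host
# system database of tagged objects (serial numbers and records are made up).
from typing import Optional

TAGGED_OBJECTS = {
    "E200-3411-B802-0119": {"type": "fire extinguisher", "location": "Room 1.04"},
    "E200-3411-B802-0120": {"type": "visitor badge", "holder": "J. Smith"},
}

def resolve_tag(serial_number: str) -> Optional[dict]:
    """Look up the object attached to a tag from the unique serial number it reports."""
    return TAGGED_OBJECTS.get(serial_number)

# A reader reports a serial number; the backend returns the object details (or None).
print(resolve_tag("E200-3411-B802-0119"))
```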
Of great importance in a RFID tag is its non-volatile memory, which retains the tag identifier and identity information when the tag is not powered. Similarly, the tag size has benefited from tremendous integrated circuit developments, enabling miniaturisation by incorporating digital and analogue components in the same physical chip. A typical RFID chip is approximately one square millimetre in size.
RFID enabled "smart labels" have the same reading property as they can be read even if the
label is not in the line of sight of the reader. Reading operations are performed automatically,
and the encoded information can be changed during their lifetime eliminating the need to
remove and re-label items.
Smart label printing is a two-step process supported by RFID printers with embedded RFID
encoders and readers. Firstly, it consists of simultaneously printing bar codes, text, and
graphics on the surface of the label. Then the RF tag embedded in the label is encoded: the RFID data is read, programmed and verified, and copied to and from printed and non-printed fields in the label templates [28].
The information stored in RFID tags can be written in two ways depending on the RFID tag
type. RFID can be Read-only or Read/Write. Read-only tags have their information recorded
during the manufacturing process, and this information typically cannot be modified or erased
during their lifetime. With greater flexibility, intelligence and ease of use, data can be written
and erased on demand at the point of application in read/write tags which provide better
traceability and updated information, and offer advanced features for locking, encryption and
disabling the RFID tag [29]. The interrogator read rate is a determinant performance factor
for applications supporting monitoring of activities involving rapid localisation or
environmental changes.
2.4.1.3 Data-On-Network
Indirect addressing is proposed to overcome the limitation of additional memory. The storage strategy consists of storing an internet address (URL) in the tag, through which the data related to the person or goods attached to the tag can be looked up on the database server, using the backend system services, rather than being stored in the tag itself [31].
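A small sketch of this data-on-network strategy, assuming a hypothetical lookup endpoint: the tag carries only a URL, which the backend dereferences to obtain the associated record.

```python
# Sketch of the data-on-network strategy: the tag stores only a URL, and the
# record itself lives on a server. The endpoint used here is hypothetical.
import json
from urllib.request import urlopen

def resolve_data_on_network(tag_url: str) -> dict:
    """Dereference the URL read from the tag to obtain the associated record."""
    with urlopen(tag_url, timeout=5) as response:
        return json.load(response)

# Example call (hypothetical endpoint, not a real service):
# record = resolve_data_on_network("http://assets.example.org/tags/0119")
```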
2.4.1.4 RFID Tags Operating Frequency
RFID tags and readers have to be tuned to the same frequency in order to communicate,
similarly to how a radio must be tuned to different frequencies to hear different channels.
There are several different frequencies an RFID system can use.
RFID tags operate at four different frequency ranges: low, high, ultra-high and microwave
frequency. In the electromagnetic spectrum, the higher the frequency range, the longer the reading range, the higher the reading speed, and the greater the interference from metal. The lower the frequency range, the shorter the signal range, the slower the reading, the lower the impact of metal presence, and the lower the absorption by moisture [32]. The RFID tag frequency can be chosen depending on its use and the specificity of automatic identification and data capture (AIDC) applications. A key principle observed in tag frequency selection is that RFID systems listen before transmitting data, to reduce collisions in the network and general interference with other WSNs, but this adversely reduces the data rate.
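The qualitative trade-offs above can be summarised as a small selection table; the sketch below uses relative rankings only (illustrative assumptions, not actual band specifications):

```python
# Illustrative, qualitative summary of the trade-offs described above.
# Values are relative rankings (1 = lowest), not actual band specifications.
BAND_PROPERTIES = {
    "low":        {"read_range": 1, "read_speed": 1, "metal_moisture_tolerance": 4},
    "high":       {"read_range": 2, "read_speed": 2, "metal_moisture_tolerance": 3},
    "ultra-high": {"read_range": 3, "read_speed": 3, "metal_moisture_tolerance": 2},
    "microwave":  {"read_range": 4, "read_speed": 4, "metal_moisture_tolerance": 1},
}

def rank_bands(prefer_range: bool, harsh_environment: bool) -> list:
    """Order candidate bands by the attribute the application cares about most."""
    key = "read_range" if prefer_range else "read_speed"
    if harsh_environment:  # metal or moisture present: favour tolerance instead
        key = "metal_moisture_tolerance"
    return sorted(BAND_PROPERTIES, key=lambda band: BAND_PROPERTIES[band][key],
                  reverse=True)

print(rank_bands(prefer_range=True, harsh_environment=False))  # longest range first
print(rank_bands(prefer_range=True, harsh_environment=True))   # most tolerant first
```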
The design of RFID specifications for application development aims at finding the
optimal tag, taking into account the tag specific attributes that define its limitations
physically, environmentally, and mechanically. These specifications are the translations of
the application RFID requirements. The tag attributes are: frequency range, environment,
mounting surface, attachment method, read range, custom printing encoding and others [33].
A RFID tag can be passive, active, semi-passive or semi-active. The basic difference
between RFID tags resides in the tag’s source of power and way of communication. With no
battery on-board, passive tags use the radio energy transmitted by the RFID reader and
communicate through backscatter, whereas active tags transmit periodically their ID signal
using an on-board battery and a receiver-transmitter. Passive tags communicate with less
interference than active ones. Backscatter is a passive tag communication with the
interrogator based on a radio frequency wave reflected from the tag. Several properties
determine the performance of RFID tags. They are the type of IC used, the read/write
capability, the radio frequency, the power settings, and its deployment environment.
RFID tags can incorporate sensors. Semi-passive and semi-active tags similarly have a battery on board powering a micro-chip and a receiver. Semi-active tags communicate via a transmitter, whereas semi-passive tags communicate via backscatter. Semi-passive tags are still at the development stage, and their deployment is just beginning. Semi-active tags are efficient in noisy environments [29].
RFID tags became prominent because a direct connection or a line of sight is not needed
during the tag interrogation, and more importantly identification and critical data can be
stored in their non-volatile memory. Active tags supported by a battery can store a large
amount of data, and the tag range and life time can be limited by the amount of data they
store.
Passive tags predominate (several billions in the world) and have a unique identification code
similar to the Universal Product Code (UPC) readable by any UPC scanner. Both serve a similar purpose, but UPC-coded items must be scanned individually whereas RFID tags or labels can be read together. Portals with embedded RFID interrogators can be set up to read RFID tags. Passive tags, which do not have an integrated power source and are powered by the signal from the RFID reader, operate in three frequency ranges: low, high and ultra-high. The
higher the tag frequency range is, the higher the amount of information the tag could
modulate, theoretically a maximum of millions of bits of data per second for an ultra-high
frequency [34].
Passive tags have three components: the chip, the strap and the antenna. Their performance is higher when the attachments of the chip to the strap and of the strap to the antenna are more resistant. It can also be affected by the tag's shape, in terms of the minimal cylinder diameter required to encase it: the smaller this diameter, the more resistance is required for both the chip and strap attachments. These tags have a short read range, and an even shorter write range. Unreliable in radio frequency challenging environments, they have no sensor support [35].
The major development challenges of passive tags are: size reduction, cost lowering, read
range and rate increase, and security improvement. This research focuses mainly on the read
range which is the absolute maximal distance a tag can be read by a RFID interrogator, and
the read rate which is the maximal number of tags that can be read by a RFID interrogator.
Active tags have a built-in power source, and their behaviour can be compared to a beacon.
They use their powered transmitter and receiver for long distance communication and at
higher data rates. They can operate more effectively in less favourable communication
environments than passive tags in which the presence of metal results in communication
interference. Due to the existence of multiple levels of communication interference, regulation has been put in place to reduce the interference impact in terms of collisions and general interference with wireless networks and systems. This regulation consists of requiring that ultra-high frequency RFID systems listen before transmitting (so-called listen before talk), not allowing simultaneous reception and transmission of data, and has resulted in a reduction of the maximum available data rate of these systems [36].
Active tags have a considerable advantage over other tags due to their ability to perform tasks even when a reader is not interrogating them, and to enter a low power sleep mode using the mechanism of a burst switch. This mechanism, which aims at keeping the receiver active while reducing its power to nearly zero to listen for the wake-up command, detects the presence of certain forms of energy and uses an ultra-low power processor to decode it and check whether it resembles the wake-up code.
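The burst-switch behaviour can be sketched as a simple listening loop; the energy-detection and decoding functions below are hypothetical placeholders standing in for the tag's ultra-low power hardware:

```python
# Sketch of the burst-switch idea: stay in a near-zero-power listening state and
# activate the main radio only on the wake-up code. The energy-detection and
# decoding functions are hypothetical placeholders.
import random
import time

WAKE_UP_CODE = 0xA5  # hypothetical wake-up pattern

def detect_energy() -> bool:
    """Placeholder for the ultra-low-power energy detector."""
    return random.random() < 0.1

def decode_burst() -> int:
    """Placeholder for decoding the detected energy burst."""
    return WAKE_UP_CODE if random.random() < 0.5 else 0x00

def sleep_listen_loop(cycles: int) -> None:
    """Wake the main transmitter/receiver only when the wake-up code is decoded."""
    for _ in range(cycles):
        if detect_energy() and decode_burst() == WAKE_UP_CODE:
            print("wake-up code detected: activating transmitter/receiver")
            return
        time.sleep(0.01)  # remain in low-power listening

sleep_listen_loop(cycles=100)
```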
2.4.2.3 Semi-Passive Tags
Semi-passive tags, which are also called battery assisted passive tags, are more fragile and
larger than passive tags. They have a small battery on board that enables them to deliver a
greater reading range and reliability than passive tags. They also offer similar functionality to active tags. Their price is of the same order as that of passive tags and lower than that of active tags. However, they are incapable of initiating data transmission from their location because
they require a reader to interrogate them first [27, 37].
The main challenge of semi-passive tags remains the maximisation of the backscatter
efficiency on the tag side to provide a high sensitivity on the reader side because they do not
actively send RF power back to the RFID reader. This results in an incomplete reading of the sensing values stored in the semi-passive sensor tag [38]. Semi-passive tags are best used in situations where there is no or little metal interference to hinder reading, and where on-board sensors are aimed at tracking.
RFID readers support Automatic Identification Data Collection (AIDC). They are either installed at read points as permanently fixed readers, or handheld for on-the-spot reading of specific tags.
The tag reading visibility has been extended to include a new type of moving read points to
allow the expansion of RFID applications through additional optimal tag reading capabilities.
This has resulted in the design of mobile wirelessly connected RFID readers that enable the
deployment of RFID read points at virtually every key junction of movement, and the
utilisation of a single mobile RFID reader in different areas throughout the work process.
RFID readers can be integrated into WSNs by connecting the RFID node to one of the WSN
nodes to authorize or keep track of people or objects carrying RFID tags [40].
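As an illustrative sketch of this integration (the node identifiers, tag identifiers and authorised set are assumptions), a WSN node hosting an RFID reader can package each tag read as a message and forward it towards the sink, where authorisation is decided:

```python
# Sketch of an RFID reader attached to a WSN node: each tag read is wrapped in a
# node message and forwarded towards the sink, where authorisation is decided.
# All identifiers below are illustrative assumptions.

AUTHORISED_TAGS = {"TAG-0119", "TAG-0120"}

def on_tag_read(node_id: str, tag_id: str) -> dict:
    """Package a tag read as the message the node would forward to the sink."""
    return {"node": node_id, "tag": tag_id, "event": "tag_read"}

def sink_authorise(message: dict) -> bool:
    """Back-end decision at the sink: is the tag in the authorised set?"""
    return message["tag"] in AUTHORISED_TAGS

message = on_tag_read("RN-03", "TAG-0119")
print(sink_authorise(message))  # True
```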
For a read/write tag, data can be written and erased on demand at the point of application.
Since a rewriteable tag can be updated numerous times, its reusability can help to reduce the
number of tags that need to be purchased and add greater flexibility and intelligence to the
application. Additionally, data can be added as the item moves through the supply chain,
providing better traceability and updated information. Advanced features also include
locking, encryption and disabling the RFID tag.
2.5 Integrated RFID-WSN
The integration of RFID and WSNs along other advanced technologies is essential in
procuring enhanced and extended sensing and tracking capabilities in a variety of integrated
devices, and extending the range of applications for people and/or object presence detection.
The taxonomy of integrated RFID-WSNs includes four classes of integration [44].
Although their design is based on simultaneously meeting sensing and tracking requirements, their integration aims at using them in:
An interaction mode enabled by communication between the RFID tags and the sensor nodes.
An extensive literature has been devoted to the hardware integration of RFID and WSN, and
our research interest includes:
the mode of hardware integration and the resulting data fusion [45],
the allocation of specific tasks to RFID and WSN devices [46], and
the classification of both RFID and WSN devices to create similarity classes of their deployment attributes [47].
The functional RFID-WSN integration [44] is summarized in Table 2.1 shown below.
2.6 Ad hoc Networks
In this research, ad hoc networks are varied configurations of WSNs formed of sensor nodes wirelessly connecting fixed and mobile smart devices containing a variety of sensors and/or
RFID tags and readers.
2.6.1 WSN Typology
Both terminals use distributed multiple radio links in multiple-input multiple-output (MIMO) technology, which is based on a multiplexing method that increases wireless bandwidth and range by multiplying the capacity of a radio link to exploit multipath propagation, and which has become an essential element of wireless communication standards used in Wi-Fi, 3G and 4G. This
technology relies on the use of multiple, smart transmitters and receivers to support three
activities: pre-coding, spatial multiplexing (SM), and diversity coding [51].
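As a brief worked reference for the spatial multiplexing gain mentioned above, the capacity of an $N_t \times N_r$ MIMO link with channel matrix $\mathbf{H}$, equal power allocation and signal-to-noise ratio $\rho$ is commonly written as follows; this is a standard textbook expression, not a formula taken from [51]:

$$ C = \log_2 \det\!\left( \mathbf{I}_{N_r} + \frac{\rho}{N_t}\,\mathbf{H}\mathbf{H}^{H} \right) \ \text{bits/s/Hz} $$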
Major WSN research challenges aim at relaxing existing constraints in order to increase network performance, lifetime and interconnectivity with other networks, and to constantly improve network self-deployment. These challenges depend on whether WSNs are deployed on land, underground, or underwater. Depending on the environment, a sensor network faces different challenges and constraints [49].
When deployed wirelessly in WSNs, autonomous sensor nodes face communication link failures, memory and computational constraints, and limited energy. These issues may affect key WSN performance criteria such as optimal deployment, node localization, clustering, data aggregation, and in-network processing [52].
Sensor nodes are active for a few milliseconds only when sensing, working in a virtual sleeping mode during the node listening period. Of similar importance in increasing the performance of WSNs is the use of multiple and alternative communication paths, in conjunction with reliable and robust solutions for message routing and flooding of the network.
The support of WSN sensor nodes receives increasing research attention, with the aim of increasing their performance by pre-programming them for a number of roles with associated tasks. Their roles can change and be selected according to event conditions at runtime, giving better control of resource allocation and decreasing resource competition within the network when fulfilling allocated services.
The functional use of WSNs is an essential requirement in their design. The sensor nodes' lifetime can be extended considerably when their configuration is based on operating the sensors at a very low duty cycle using duplicate nodes, synchronised sensing within the sensor node clusters, and sleeping using a dual on/off switching mode. Of great interest in the energy saving challenge is the use of the microcontroller and transceiver sleeping modes.
The activation of these two components consumes energy over long periods of WSN operation. In WSN applications with user-specified delay requirements, the dynamic tuning of the sensor nodes' duty cycle is systematically activated to achieve the desired end-to-end delay guarantees [56]. More importantly, the use of intelligent middleware aims at using fewer consequential resources to serve additional service requests. The focus is mainly on monitoring the energy level and adapting system behaviour to select the available energy levels that allow the application to achieve its targets [57].
A planning mechanism based on the use of a resource engine can interactively calculate
energy levels and reserve the required consequential resources needed to successfully support
each application service request allocated in the network. This mechanism aims at developing
resource distribution strategies needed to provide efficient system adaptation of the network,
and effectively achieve the desired trade-off between end-to-end delay and energy
conservation [20].
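A minimal sketch of such a planning mechanism is given below, assuming a greedy admission rule in which a service request is reserved only if every node it involves keeps a minimum energy reserve; the energy figures, request format and threshold are illustrative assumptions, not details from the cited works.

```python
# Illustrative planning/resource engine: reserve node energy for incoming
# service requests while keeping a per-node energy floor (assumed values).

def reserve(requests, node_energy, reserve_floor=5.0):
    """Admit requests greedily while every involved node keeps reserve_floor J."""
    admitted, rejected = [], []
    for name, costs in requests:               # costs: {node_id: energy in J}
        if all(node_energy[n] - c >= reserve_floor for n, c in costs.items()):
            for n, c in costs.items():
                node_energy[n] -= c            # reserve the consequential resources
            admitted.append(name)
        else:
            rejected.append(name)              # defer: would exhaust a node
    return admitted, rejected


if __name__ == "__main__":
    energy = {"n1": 20.0, "n2": 12.0, "n3": 8.0}
    requests = [("temperature-report", {"n1": 2.0, "n2": 1.5}),
                ("video-burst", {"n2": 9.0}),
                ("presence-scan", {"n3": 1.0})]
    print(reserve(requests, energy))           # video-burst is rejected
```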
Future internetworking aims at interconnecting all physical worlds, creating autonomous interaction based on full interoperability with the commodity internet, using seamless interconnection between remote WSNs, web servers and IP networks to create hybrid networks [58]. This seamless interconnection can be achieved by using a dynamic service in the form of WSN middleware to enable IP-based hosts, considered as external agents, to gather sensed data from one or more remote WSNs through application-layer gateways [59]. IP-based hosts can access and manipulate the IP-enabled nodes which are part of the WSN, and extract data from remote dynamic services enabled in WSNs.
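The sketch below illustrates the idea of such an application-layer gateway using Python's standard http.server: IP-based hosts pull the latest readings gathered from a remote WSN over HTTP. The in-memory cache and the /readings endpoint are assumptions for illustration only, not a middleware described in the cited works.

```python
# Minimal application-layer gateway sketch: serves the latest WSN readings
# to IP-based hosts; the cache contents and endpoint are illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

SENSED_DATA = {"node-07": {"smoke": 0.02, "temp_c": 21.5}}  # filled by the WSN side


class GatewayHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/readings":
            body = json.dumps(SENSED_DATA).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)


if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), GatewayHandler).serve_forever()
```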
The deployment of autonomous sensor nodes, before their aggregation into clusters, is performed either in an ad hoc fashion or with careful planning and engineering. When deployed outdoors in an ad hoc way, they autonomously organize themselves into a network by wirelessly connecting with each other and to a base station via gateways; when deployed indoors, their exact locations are planned.
A variety of software is embedded in enhanced hardware sensors to compose what are called software sensors, increasing the sensing intelligence required for sensing accuracy. This intelligence can be extended outside hardware sensors in the form of intelligent agent software, in what are called virtual sensors.
WSNs have been widely used, primarily for research purposes, to demonstrate the development of new technologies and to explore the remaining limitations of existing ones. Their first application area was military surveillance for the tracking of enemy forces [60, 61], studying the complexity of robust tracking of people and vehicles moving in the proximity of a WSN and the requirements of in-network processing of the sensed data. The first significant application deployment of a WSN occurred in 2002 [62] to monitor the environmental conditions around the nests of storm petrels on a small island, with the aim of examining the nesting behaviour of birds.
There is a broad range of WSN applications, whose feasibility has been demonstrated by either a prototype or a real-world deployment. Most WSN applications are aimed at pure data measurement, with the collected data relayed to a server for processing. The type of data collection can differ depending on the application purpose: low-rate data collection, high-rate data collection and on-demand data collection. The difference between these three types resides in the involvement of the user, who triggers the collection of data on demand, whereas different types of homogeneous devices with different configurations of sensors generate the low- or high-rate collected data [63].
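The following sketch illustrates the distinction between periodic collection (low- or high-rate, depending on the period) and on-demand collection; the sampling rates and the read_sensor() stub are illustrative assumptions.

```python
# Sketch of the three collection patterns; rates and read_sensor() are illustrative.
import random
import time


def read_sensor():
    return {"temp_c": round(random.uniform(18, 30), 2)}


def periodic_collection(period_s, duration_s):
    """Low-rate (large period) or high-rate (small period) data collection."""
    samples, end = [], time.monotonic() + duration_s
    while time.monotonic() < end:
        samples.append(read_sensor())
        time.sleep(period_s)
    return samples


def on_demand_collection():
    """On-demand collection: a single reading triggered by the user."""
    return read_sensor()


if __name__ == "__main__":
    print(len(periodic_collection(period_s=0.5, duration_s=2)))  # ~4 samples
    print(on_demand_collection())
```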
Several types of WSN applications have been developed in a variety of domains. The number of deployed WSNs has increased tremendously owing to maturing software infrastructures (e.g., TinyOS, TinyDB) and the increasing robustness of networking protocols and support systems [64]. Their use has been extended to cover several domains and a myriad of applications, including structural monitoring [65], cold chain management [66], precision agriculture [67], emergency response [68], and health care [69, 70].
The tremendous implementation success of WSNs in a wide range of domains has created the need for their integration or combined use with other technologies to support unconventional and real-world applications involving smart devices such as RFID [71, 72], mobile robots, smart phones and cameras. WSNs perform on-node processing and event detection, and can even classify the observed data within the network.
2.6.5.1 WSN Sensor Nodes Clustering
Spatial clustering consists of identifying a set of cluster representatives that follow the same data trends, using, for example, time series analysis, to reduce data acquisition and transmission times and power consumption [74, 75] in a sensor network configuration with limited storage, communication capabilities and power resources, mainly in the case of generalised passive node use. Other considerations, including sensor node deployment and communication characteristics, are also used for spatial clustering.
The sensor nodes clustering can be done off-line at the base station or inside the network.
Off-line clustering consists of the sensor node data transmission to the central base station
and can lead to heavy traffic in the network, augmenting the processing time. In-network
clustering consists of grouping sensor nodes in a network on the basis of their data
characteristics by using data regression analysis for each node and configuring a node model
called the communication graph associated to a distance matrix measuring the distance
between the graph nodes [76]. Different techniques using specific algorithms have been
suggested to solve this complex sensor nodes clustering problem, and significant changes in
the nodes data will result in the reconfiguration of the communication graph, called the
sensor nodes clustering maintenance.
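A minimal sketch of this in-network clustering idea is shown below, assuming the distance matrix between nodes is derived from the correlation of their recent time series and that hierarchical clustering groups the nodes following the same trend; the synthetic readings, the 1 − correlation distance and the threshold are illustrative assumptions, not the algorithms of the cited works.

```python
# Group nodes whose time series follow the same trend, using a
# correlation-based distance matrix and hierarchical clustering.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

rng = np.random.default_rng(0)
trend = np.linspace(20, 25, 50)
readings = np.vstack([
    trend + rng.normal(0, 0.1, 50),        # node 0
    trend + rng.normal(0, 0.1, 50),        # node 1, same trend
    30 - trend + rng.normal(0, 0.1, 50),   # node 2, opposite trend
])

# Distance matrix: 1 - Pearson correlation between node time series.
dist = 1 - np.corrcoef(readings)
np.fill_diagonal(dist, 0.0)

labels = fcluster(linkage(squareform(dist, checks=False), method="average"),
                  t=0.5, criterion="distance")
print(labels)   # e.g. [1 1 2]: nodes 0 and 1 form one cluster
```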
a) Confidentiality
Sensor nodes use symmetric key encryption to provide confidentiality [78]. Strong authentication and encryption mechanisms are used to protect data confidentiality at all network levels and outside the network, to ensure that data is accessed by the intended recipient only and is free from malicious attacks, referred to in the literature as an untrusted third party.
The risk of confidentiality violations can be increased by sensor nodes that mislead the base station by gaining access to a sensor node's keying material and inserting modified or inaccurate data into the network when forwarding and aggregating the data. These violations also occur when sensor nodes are corrupted, affecting the data aggregation during in-network processing. Data aggregation consists of collecting data generated by sensor nodes at
each intermediate node en route to the sink in order to reduce the volume of messages
transmitted in the network. These sensor nodes can also fail due to random and non-malicious
causes which include sensor malfunctioning, battery exhaustion, and device disconnection
from the network or inability to properly execute the protocol due to hardware or software
failures.
Protection against these violations is developed in mechanisms that provide both confidentiality and integrity of the aggregated data, using the data from the same sensor node cluster as the compromised node, together with the concepts of delayed aggregation and peer monitoring based on local cluster interaction [79].
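As a concrete, hedged illustration of combining confidentiality and integrity on node-to-sink payloads, the sketch below uses symmetric authenticated encryption (AES-GCM from the third-party `cryptography` package); the pre-shared cluster key, nonce handling and payload format are assumptions for illustration, not the mechanisms of [78, 79].

```python
# Confidentiality + integrity for a sensed payload via AES-GCM
# (symmetric authenticated encryption); key and payload are illustrative.
import json
import os

from cryptography.hazmat.primitives.ciphers.aead import AESGCM

cluster_key = AESGCM.generate_key(bit_length=128)   # pre-shared within the cluster
aead = AESGCM(cluster_key)

reading = json.dumps({"node": "n12", "smoke": 0.04}).encode()
nonce = os.urandom(12)                               # never reuse a nonce per key
ciphertext = aead.encrypt(nonce, reading, b"cluster-7")   # b"cluster-7": authenticated header

# At the aggregator/base station: decryption fails if the packet was altered.
assert aead.decrypt(nonce, ciphertext, b"cluster-7") == reading
```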
b) Data Integrity
Data integrity ensures the integrity of the process of transmitting and receiving data inside the
network, preserving the original formats and sequences and not allowing the data packets
modifications, alteration, disruption and absorption by network attackers. In WSNs, Routing
protocols ensure the routing maintenance by providing reliable multi-hop communications for
different network configurations.
Data integrity is assured by effective routing and networking protocols which are
characterised in WSNs by the migration of early flooding-based and hierarchical protocols
over the last two decades to geographic and self-organizing coordinate-based routing
solutions [80]. The networking protocols are needed to support the implementation of various
network control and management functions such as synchronization, node localization, and
network security. The first category of protocols is a blind technique which results in
duplicated packets that might keep circulating in the network, causing an implosion or
overlap problem [81]. These duplicated packets result from the same sensed values generated
successively by the same sensor node or simultaneously by different ones located in the same
proximity. Their elimination can be obtained by using the gossiping technique [82] which
consists of selecting only one copy of each packet. The gossiping technique tackles the
network implosion, and the delay required for the packet selection is reduced using in-
network processing capabilities available with the active node of the sensor nodes cluster.
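The sketch below illustrates this duplicate suppression at the active node of a cluster: only the first copy of each packet identifier is kept and forwarded to a single randomly chosen neighbour, in the spirit of gossiping rather than flooding. The packet fields and neighbour list are illustrative assumptions.

```python
# Duplicate suppression with gossiping: keep one copy of each packet id and
# forward it to a single neighbour instead of flooding every link.
import random

seen_ids = set()


def handle_packet(packet, neighbours):
    """Drop duplicates; forward a single copy to one neighbour."""
    if packet["id"] in seen_ids:
        return None                       # implosion avoided: duplicate discarded
    seen_ids.add(packet["id"])
    return random.choice(neighbours)      # gossip target for the single copy


if __name__ == "__main__":
    nbrs = ["n2", "n5", "n9"]
    print(handle_packet({"id": "pkt-41", "temp_c": 22.1}, nbrs))  # forwarded
    print(handle_packet({"id": "pkt-41", "temp_c": 22.1}, nbrs))  # None (duplicate)
```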
c) Service Availability
Service availability includes the proper working of all homogeneous and heterogeneous devices, uninterrupted communications and in-network processing, and services, data and network resources that legitimate users can access when requested. This availability is characterised by the absence of delay or interference, which are inherent to three major types of security threats observed mainly in multi-hop WSNs: passive, active and Denial-of-Service (DoS) attacks [83].
Passive attacks, which result in the simple stealing of information over the wireless medium and compromise data confidentiality, are unnoticeable and do not harm the network; their detection is very difficult. Of further consequence for the network, active attacks, which are permitted by non-elaborate routing protocols, modify, tamper with and alter the packets, affecting the integrity of data in the network. DoS attacks target the availability of services to users by preventing a sensor node from sending traffic or by preventing communication between the networks.
controlled by actuators defined as anything that creates the agent outputs. They may cause
changes in the environment, or move the agent to a new location in the environment. They
can be used just to generate information or knowledge in the multi-agent decision support
system.
Agents are grouped in a loosely coupled network and work together to support decision making processes requiring problem solving capabilities and knowledge elaborated beyond the different entities themselves. This network configuration is called a multi-agent system (MAS) and is based on the use of intelligent techniques and models represented in the form of different agents following an agent typology. The agent models are used in hybrid intelligent decision systems to define the building blocks of the system, to be loosely integrated in a multi-agent distributed architecture. These building blocks are used to support cooperation, coordination, negotiation, and the like between the agents [88].
Agents can be static (static agents) or move around the network (mobile agents). They can be proactive, initiating interaction with other agents, or active, just reacting to solicitation from other agents. Deliberative agents can have a reasoning model and engage in planning and negotiation while coordinating with other agents, whereas reactive agents use a pre-set response when they operate and interact with each other. Agents which play more than one role in the multi-agent system are called hybrid agents.
Depending on their abilities, agents can be: collaborative agents (collaborative and autonomous), collaborative learning agents (collaborative and learning), interface agents (autonomous and learning) and truly smart agents (collaborative, learning and autonomous) [90]. Other agent constructs can be combined to determine specific agent types, such as static deliberative collaborative agents, mobile reactive collaborative agents, static deliberative interface agents, mobile reactive interface agents, etc. Agents can also fall into one of the following categories: table-driven agents, simple reflex agents, model-based reflex agents, goal-based agents, utility-based agents and learning agents.
In the present research work, the agent typology is limited to the following types: collaborative agents, interface agents, mobile agents, information/internet agents, reactive agents, hybrid agents and smart agents.
Depending on whether an agent can sense the complete state of the world or not, its environment is accessible or inaccessible. In a deterministic environment, the changes of state of the environment are completely determined by the agent's current state and actions, whereas in a non-deterministic environment the next state cannot be fully determined from them. Depending on whether the agent's interactions with the environment are or are not limited to a number of distinct, clearly defined percepts and actions, its environment is discrete or continuous. It is dynamic, as opposed to static, when the state of the environment can change while the agent deliberates.
The agent environment is episodic, as opposed to non-episodic (sequential), when the agent's interaction sequences with its environment (or "episodes") are independent and decisions do not depend on previous agent decisions/actions. A single agent operates by itself in an environment. Of great significance in the definition of an intelligent agent is the definition of its task environment, in terms of specifying its performance measure, environment, actuators and sensors. Agents must observe their environment in order to operate, even when the observable environment might be limited by the current spatial, social or communication context of the agent [92].
A basic behaviour results from an initial learning by an agent of its environment and is
generated as an internal world-model reinforced by the different agent states occurring from
its initial self-configuring.
2.7.2.5 The Agent Interaction
Of great importance in agent-based modelling are the integration of changes in environmental conditions concerning the other agents interacting together, the identification of desired behaviours associated with a positive impact on purpose and goal meeting, the integration or suppression of a specific behaviour without using pre-configured rules, and the way a behaviour generates an internal world-model and links to the environment via the agent's sensors and actuators to refine its internal model. The analysis of agent interactions provides valuable information to determine successive agent behaviours.
The interaction between agents requires the specification of actor’s interaction, the interaction
context and nature, and the environmental resources needed for its support. This interaction
between agents is a collaboration which consists of synchronizing tasks and managing
conflicts when possible. It results in agent group formation and task coordination which are
used as a means of adapting better to the dynamism of shared environments.
interacting with. The most common arguments of focus in this research are threats, examples
of similar situations and appeals to self-interest.
Although trust is a critical prerequisite of any agreement process, the lack of evidence in
terms of difficulty to establish causality when conveyed in the agent arguments, accentuates
the decision complexity. The argumentation in negotiation must be based on reliable and
strong evidence to support the arguments acceptance which results in a valid arbitration of
current behaviours.
The dynamics of the continuous complex interaction between agents can generate
intelligence when agent emergent properties arise from the interaction of their different
behaviours at both the individual (single agent) and group level (multi-agent system).
generating its own knowledge base, validating some multi-agent system specifications and
capturing causality among messages sent by agents. This causality which relates to a concrete
event, is the cause associated to the event occurrence, and results in semantic information
explaining what actions are performed by agents and how their behaviours are modified by
these actions [101] .
The process of agent coordination, which aims at adapting to the environment, consists of two major tasks: detecting dependencies, and selecting the appropriate coordination actions to apply [104]. The emergence in the coordination of new agents and/or behaviour changes in existing agents requires accommodating additional individual plans to incorporate new dependencies, ensuring the validity of the behaviour arbitration process. These dependencies are between the agent's own prospective actions and the potential actions of other agents, exploring the coordination synergy of extended emergent properties to ensure that all the agent goals are achieved; this results in a global plan detailing decisions and actions [105, 106]. The type of this plan will differ depending on the nature of the agreement/disagreement between the agents present in the coordination. This plan is called a joint plan if an agreement between agents has been reached, and a multi-plan otherwise.
Of great significance in the evaluation of this plan are the agents' joint goals as related to the multi-agent system functionality, represented using the dependency model of coordination showing the agents' dependency requirements, including the complexity of patterns of interactions among agents pursuing partially conflicting goals. These agents can choose to play or abandon certain roles within the coordination, following the appropriate sequence of plan actions that best achieve their goals while respecting the given regulations within the system [107].
Agent coordination is based on four tasks: the use of models of coalition formation to determine when and with whom to interact for the achievement of common individual and global goals, the redistribution of tasks between agents, the integration of the results to measure the agents' performance, and the distribution of the benefits of synergies resulting from this cooperation [108]. It is used for coordinating open multi-agent systems, enabling the accommodation of heterogeneous agents and ensuring secure control of access rights to eliminate or reduce security risks [109]. The visual representation shows only the agents' coordination [110].
Multi-agent modelling starts with the specification of the agent functionality by defining their
responsibility and capabilities in terms of operations, rules and planning; taking into account
the organisation that describes the framework where agents, resources, tasks and goals
coexist. The functionality of the organization is defined by its purpose and tasks decomposed
into operations and distributed between the different agents. Operations are structured into
tasks and plans, and associated to goals and consequences resulting from the performance of
these tasks. Satisfaction and failure relationships are defined to represent the positive or
negative influence of the goals by the execution of the tasks by the agents.
emerges as a result of interactions of the agent behaviours acting individually or interacting
together.
ABM consists of using computational models to simulate both the actions and interactions of autonomous agents performing individual or group tasks, individually or collectively, with a view to identifying the changes in the system and deriving plausible explanations of their behaviour. In this research, the only agent-based modelling patterns considered are the model architecture, the agent synchronisation, and the connection and communication between agents, as sketched below.
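The following minimal Python sketch covers exactly these three patterns: a model architecture holding the agents, a synchronised stepping loop, and message-based connection between agents. All class and variable names are illustrative assumptions rather than a specific ABM framework.

```python
# Minimal ABM sketch: model architecture, synchronised stepping, and
# message-based communication between agents (all names illustrative).
import random


class Agent:
    def __init__(self, name):
        self.name, self.inbox = name, []

    def step(self, model):
        # React to received messages, then contact a random peer.
        if self.inbox:
            self.inbox.clear()
        peer = random.choice([a for a in model.agents if a is not self])
        model.send(peer, f"hello from {self.name}")


class Model:
    def __init__(self, n_agents):
        self.agents = [Agent(f"a{i}") for i in range(n_agents)]
        self.pending = []

    def send(self, agent, msg):
        self.pending.append((agent, msg))

    def step(self):
        for agent in self.agents:          # synchronised step for all agents
            agent.step(self)
        for agent, msg in self.pending:    # deliver messages between steps
            agent.inbox.append(msg)
        self.pending.clear()


if __name__ == "__main__":
    model = Model(4)
    for _ in range(3):
        model.step()
    print([len(a.inbox) for a in model.agents])
```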
Of great importance in agent-based modelling is the generalisation of the classic logical implication, which has shown some limitations due to the complexity of context-dependent functions. The suggested solution remains that the expert decides the rules from his or her knowledge, and operators are chosen that work well in every context invoking these context-dependent functions.
Social-reasoning mechanisms based on the values agents exchange are incorporated for the integration of autonomous characters displaying social behaviours, to represent intelligent virtual humans playing social roles in real-life applications [113]. The interaction between virtual actors mostly involves facing the complexity of animating pure social behaviours to manifest all the cognitive agent properties and create fully automated services.
2.7.6.1 Knowledge Representation
The difficulty of enabling autonomous agents to allocate tasks is accentuated by computational agents participating in social and coordination processes without a clear definition and representation of their roles and behaviour, in addition to the presence in the interaction of agents with limited modelling abilities, affecting the coherence and consistency of the collaborative problem solving system [117]. This poses the problem of their knowledge representation and reasoning in terms of social choice rules, the complexity of reasoning with such representations, and the handling of preferences. The coordination between these agents is reflected by the agreement between them, and can be seen as "the management of dependencies between organisational activities" [118], or the rules governing the interaction between agents, aimed at the agents' convergence on interaction patterns which deal with solving the dependency detection and decision tasks.
Of great significance in the agent modelling organisational model is the gulf between the use of design-time coordination mechanisms, mainly for closed distributed problem-solving systems, and adaptive, flexible run-time coordination mechanisms for dynamic complex problem-solving systems deployed in open environments [104].
Research into agent-based modelling has produced a choice of methodologies, design tools and platforms for deploying autonomous agents in open complex environments. Several agent-oriented software development methodologies have been elaborated over the past decade, emphasizing the description of agent models and their usage during the development of multi-agent systems. The most used among these methodologies are Mulan, Gaia, MaSE, Prometheus, Tropos and PAOSE [121].
These methodologies, which are process-centred and object-oriented, use the same common concepts in agent-based modelling: use cases, system structure or organisation diagrams, role models, interaction diagrams and protocols, models of internal events, data structures and decision making capabilities. Their openness to designing open complex multi-agent systems has required the development of more than a hundred agent-based modelling software packages, addressing the specificities of different domains of system use, programming languages and operating systems [122]. Although there is a proliferation of modelling software, tool support does not sufficiently cover the agent modelling and cooperation requirements [123].
Of great significance in understanding how to control a system is the capture of the properties and behaviour of the individuals that determine the properties of the systems they compose. System control becomes a necessity due to the different and adaptive nature of individuals, and IBM enables the study of the relationship between their adaptive behaviour and emergent properties. It can provide an explicit basis for supporting the modelling of decisions and reducing the need for ad hoc decision modelling.
The concept of cooperative control covers three essential structural elements of global
optimised systems: multi-agent control, distributed systems and networked control. The
access to the global information collected by all agents during their interaction in a
distributed multi-agent system configuration poses the problem of information distribution
and its control within the network between the different agents and outside between the
different users. Any information gathered by a single agent does not reach other agents
without explicit communication. The decision interdependency and aggregation within the
system which require agent's collaboration and cooperation for a real-time adaptation when
self-organising and deploying given their spatial distribution, outlines the system control
complexity.
mainly those involving the mobile human surveillance in which the assignment of a route to
alarms is a complex problem.
The importance of hybrid intelligent decision support systems as an essential element of the
legacy system network, is emphasized by the need to support the networks tasking between
the agent network, the logistics network and the communication network to improve the
response time to incidents by reducing the communication bottlenecks [129]. The
incorporation of agent technology in conceptual frameworks aims at examining the functional
requirements for elaborating an effective and reliable self-organizing incident monitoring and
communication support.
Intelligent software agents undertake many of the operations performed by human users, as well as a multitude of other tasks representing the MAS activity. Of greater complexity in the development of a large-scale MAS based on the use of intelligent software agents is how to facilitate communication among agents of different types, in terms of modelling the different types of agents: interface agents, task agents, information agents and middle agents. This facilitation process poses the complex problem of heterogeneity, which derives from differences in communication, coordination, environment, functionality, security, semantics, and system and hardware [130]. These differences, which are expressed in terms of broad requirements translated into specific implementations, are modelled into the intelligent software agents composing the agent architecture of the MAS infrastructure.
The study of the behaviour of humans involved in group activities and situations has shown that humans behave differently in crowded scenarios. Computerised representatives of human agents with no decision making autonomy form composite agents, and are used for generating multiple agent interactions and simulating crowd movement patterns, validating the human-like behaviours generated by these agents [131]. Each composite agent consists of a basic agent that is associated with one or more proxy agents.
Hybrid human agents are active hybrid agent-user systems forming multi-agent systems that can achieve highly challenging and interactive cooperation, modelling a variety of emergent behaviours for agent-based crowd simulation as group models occurring in various domains like emergency escape or response, or real-time crowd systems for video games [132]. The decision making process is created automatically by executing computerized instructions.
interfaces, and advanced flexibility and customization to easily add, modify or remove
services on request, with very limited impact from the programming environment.
Cloud computing architectures are the ultimate integration level, providing simultaneously, in one integrated novel design framework, infrastructure (IaaS), platform (PaaS) and software (SaaS) as a service, and adding composite services and applications as interaction blocks between users. All available services are stored in the Web Services Architecture Model (WSAM), a sort of model base supported by a Web Services Description Language (WSDL) establishing the logical link between applications and services.
Of great complexity in the implementation of these integrated service-oriented multi-agent architectures is the incompatibility between agent platforms. The advocated solution is either the distribution of services and applications in the agent infrastructure, by modelling the functionalities of the agents and the systems as services and applications invoked by the agents, or the organisation of the communication between the different agent-based models of the platform [137].
Web-based groupware systems integrate both the user’s physical and collaborative context,
and are built using object-oriented models. These models aim at personalising information
content which consists of enabling the filtering of context-aware profiles to match the users’
context and the use of progressive access models to organize the selected information
corresponding to the available content matching the profile. Wiki systems such as XWiki and
MediaWiki, can be considered as a new generation of Web-based groupware systems
enabling the combination of mobile technologies together with some of the Web principles
that allow mobile devices to share data and contextualized information.
2.8.3.3 Mobile Devices Adaptation
The use of mobile devices in Web-based groupware systems, in the context of a people access control system, poses the problem of the users' specificity in terms of personal criteria, such as the user's personal characteristics, background, culture, interests and preferences, and the context of their use, which includes the location, specific situations and environmental conditions [140]. Of great significance in the research and development effort to enhance access to Web-based collaborative systems are the adaptation mechanisms and principles aimed at reducing the limitations of mobile devices in terms of battery lifetime, screen size, interoperability and intermittent network connections, to display or introduce the
appropriate informational content. The improvement of the adaptation process consists of
implementing a context-based filtering process that proposes to adapt the information
delivered to mobile users by filtering it according to the current user’s context and to the
user’s preferences for this context. Allowing a direct participation of the users into the
adaptation process to express individual user profiles, a progressive refinement of the profiles
description, and the simplification and reuse of profiles, results in additional requirements
about the delivery of services, data and presentation [139]. Taking into account the interface
requirements of mobile devices in use in the MAS, these additional requirements are
translated into specifications incorporated in the design of interface agent's actions.
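A minimal sketch of such context-based filtering is given below, assuming content items carry a zone and a topic and that the user profile exposes the current context and declared preferences; these field names are illustrative assumptions, not the profile model of [139, 140].

```python
# Context-based filtering sketch: keep only the items matching the mobile
# user's current context and declared preferences (field names illustrative).

def filter_content(items, context, preferences):
    """Return items matching the current zone and the user's preferred topics."""
    selected = []
    for item in items:
        in_context = item["zone"] == context["zone"]
        preferred = item["topic"] in preferences["topics"]
        if in_context and preferred:
            selected.append(item)
    return selected


if __name__ == "__main__":
    items = [
        {"topic": "evacuation-route", "zone": "block-B"},
        {"topic": "evacuation-route", "zone": "block-A"},
        {"topic": "maintenance-notice", "zone": "block-B"},
    ]
    context = {"zone": "block-B", "situation": "fire-alarm"}
    preferences = {"topics": {"evacuation-route"}}
    print(filter_content(items, context, preferences))
    # -> only the block-B evacuation route item is delivered
```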
Self-adaptive systems aim to meet their higher-level objectives: to be deployed on a large scale, to function without any or with very little human intervention, and to regulate emerging requirements and orchestrate emergent properties.
2.8.4 Web-Based Learning Support Systems
A tremendous effort has been made by researchers and practitioners in the domain of web-based support systems to enhance the design and increase the usability of personalized or collaborative learning support in dynamic and heterogeneous learning environments. Of great importance in the design of these support systems is the implementation of learning mechanisms to increase the learning intelligence by enhancing the interactive interrogation of learners and providing them with cognitive support for enquiry-based learning (EBL), which includes problem-based learning [142] and treats knowledge as a cognitive structure of a person and learning as an active process of elaborating knowledge, stimulating learners to share the knowledge acquisition [143].
The new configuration of web-based learning support systems is based on the use of
Adaptive Web technologies to integrate personalization and collaboration in a web-based
learning management environment. These systems support learning as an interactive,
dynamic, and active feedback process in which the learning facilitator replaces the direct
interaction between the teacher and students. They are integrated adaptive and intelligent
learning management systems (LMS) that support a wide range of learning facilitator’s roles
using different learning models [144] . They aim to distribute information and deliver
knowledge to an increasingly wide and diverse audience, using personalized or group pre-
designed learning paths which can be modified during the course of learning, depending on
individual learning progression. Learning requirements and learning styles are integrated in
the definition and support of personalized learning needs, taking into account the key
characteristics of learning support: complexity, individuality and adaptability, interaction, and
activity and assessment.
The complex and dynamic nature of varied innovative and generic web-based services
supporting web-based collaborative self-assembling and self-adaptive applications using
advanced information technologies increases the complexity of service management that
attracted various related research and development efforts offering e-solutions for
globalization, automation, and self-service activity developments. More elaborate e-
environments based on the use of network organisational models are continuously proposed
to support the automation of the cooperation over the web of service providers and service
consumers. This aims at providing a cost-effective, rapid and reliable new product and
application services to instantly interchange documents and information, and share
collaborative decision making processes with many different organisational entities [146] .
The main services included in the service management are: desktop management (DASH),
network management (NetMan), storage management (SMI), systems management
(SMASH) and virtualization management (VMAN). The storage management includes
services supporting the plug-ins of a wide range of domain knowledge sources, its related
services and a context-aware capability to dynamically identify the relevant services
according to the current focus of attention. The domain related services which can emerge or
cease, are structured and deployed using a domain service bus (DSB) aimed at dealing with
the service domain users requests.
The prominence gained in recent years by wireless communication applications with the
growing technology of ad hoc networks and agent-based architectures has led to the design of
innovative self-organising intelligent decision support systems to effectively support the
deployment and control of integrated RFID – WSNs. These support systems which are based
on self-organized agent based architectures supported by collaborative decision mechanisms,
provide an efficient way of choosing the network configuration and control policies and
scenarios to reduce both computational overhead and energy consumption. They need to be
explicit as to what the underlying decision mechanisms are, and what the functional
consequences of the decisions are.
The decision making support is based on decision mechanisms enabling individual decisions
by individual members needing to be made jointly with other group members to reach a
consensus and so avoiding conflict of interest between group members. These members can
be people, smart devices and intelligent agents. Following their local (individual) behavioural rules results in organized behaviour by the whole multi-agent system, without the need for global control from outside the system involving external decision makers.
Decision making activities in complex environments are based on real time group decisions
requiring a decision consensus needed to support the agent argumentation and negotiation.
The advent and generalisation of web-based services decision support has enabled inter-
organizational collaborative decision support systems and integrated group
processes supporting a wide variety of real-time complex decision making activities.
Individuals, smart devices and elements of behaviours and system control modelled as agents,
often make decisions in small groups or in large organizational networks interacting together.
The decision consensus is obtained taking into account the decision independence of some
members with the group decisions which include combined decisions, aimed at tasks
allocation that might require a new consensus depending on the tasks outcomes. These tasks
result from the decomposition of the activity covered by WSNs in addition to those required
by the deployment, configuration and control of the network.
Agents allocate tasks to themselves and to others, tasks that might involve further agents, and if combined decisions involve significant conflicts of interest, agents can struggle for control [148]. Consensus group decisions, aimed at agreement between several agents, require different cohesion and aggregation mechanisms from individual-based combined decisions. They can be examined with respect to the extent to which they involve conflicts of interest between group agents and whether they involve local or global agent communications [149].
Conflicts of interest can affect the fitness consequences of the decisions. Agent groups with
local communication rely on self-organizing rules generated from their own behaviours,
whereas more-complex negotiating behaviour can occur only in agent groups that are small
enough to enable global communication. This second aspect applies only to human agents
and their limited local communications.
Of great significance in reducing the conflicts of interest to reach a decision consensus is the
full understanding of who makes the decisions, what the underlying decision mechanisms are,
and what the decision functions are. This results in defining whether group decisions are
shared or unshared.
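As a hedged illustration of a shared group decision, the sketch below adopts an option only when the proportion of agents preferring it reaches a quorum, and otherwise flags a conflict of interest to be renegotiated; the preference values and the quorum threshold are assumptions for illustration only.

```python
# Simple consensus sketch: adopt the most preferred option only when a quorum
# of agents agrees; otherwise flag a conflict of interest (values assumed).

def group_decision(preferences, quorum=0.75):
    """preferences: {agent: chosen_option}. Return (option, status)."""
    counts = {}
    for option in preferences.values():
        counts[option] = counts.get(option, 0) + 1
    option, votes = max(counts.items(), key=lambda kv: kv[1])
    if votes / len(preferences) >= quorum:
        return option, "shared decision"
    return option, "conflict of interest: renegotiate"


if __name__ == "__main__":
    print(group_decision({"a1": "route-A", "a2": "route-A", "a3": "route-A", "a4": "route-B"}))
    print(group_decision({"a1": "route-A", "a2": "route-B", "a3": "route-B", "a4": "route-A"}))
```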
2.9.3 Real time data capture, predictions and decision making support
Real time data capture, predictions and decision making within the surveillance domain, for hazard identification and mitigation, require the use of a variety of decision support systems to support the domain fusion functions, which include data processing and mining, and knowledge discovery and extraction. The main aspects of the decision making process used in fusion functions are detailed in Chapter 3.
The proposed research aims at providing a novel integrated solution for the several problems
identified in recent works, studies and developments.
The studies examined in this work have suggested a network typology based on the distinction of homogeneous and heterogeneous devices around the sensor node components, which confer an active or passive role on the node depending on its configured components [59]. In the proposed research, sensor node devices configured primarily to perform the different functions of sensing, localizing and tracking, as listed in Table 2.1, are considered generic smart homogeneous devices. Their deployment and configuration can be enhanced by the use of heterogeneous devices such as RFID readers, sprinklers, people counters, etc.
The examined literature has not covered the design of generic homogeneous sensor node devices, which is considered in this work as an essential feature in the definition of flexible, adaptive, configurable WSN architectures.
WSN-RFID integration:
Although a hardware solution has been proposed to adapt the integrated WSN-RFID in a context-aware mode for new extended sensing capabilities, by integrating RFID tags and readers with WSN nodes [153], the study has not addressed device interoperability and does not support dynamic smart nodes with automatic reconfiguration information generated by the dynamic control requirements imposed by self-tuning distributed multi-agent systems. Of great interest in the study of RFID-WSN integration problems is the needed validation of integrated configurations when integration by multi-sensor heterogeneous data fusion is allowed.
The recent development of different types of networks that include ad hoc networks
and WSN has influenced the applications design and distribution [154]. The proposed
research focuses on a novel approach for data, information and knowledge fusion.
Multi-sensor heterogeneous data fusion, which includes data cleaning, compression and aggregation based on a cluster-based approach and is effectively supported by data warehousing, is an upstream step to the process of information and knowledge fusion that contributes to increasing both the WSN and information fusion system performance.
This novel approach supports the strategy of decoupling the data capture and
processing from information and knowledge fusion.
2.11 Summary
This research focuses on hybrid intelligent decision systems based on MAS architecture to
support the configuration of integrated RFID and wireless sensor networks (WSNs). Its
interests reside in examining severe constraints that are imposed on the sensing, storage,
processing, and communication features of the sensor nodes, and designing a flexible and
adaptive WSN configuration solution to enhance their deployment and increase their
performance using hybrid intelligent web-based support systems. This network architecture is
considered for the support of distributed detection in the domain context of civil defense.
The reconfiguration of WSNs is needed when the sensor nodes may become faulty due to
improper hardware functioning and/or lack of energy supply (dead or low battery power).
Their deployment covers a wide variety of environmental settings, ranging from harsh and
remote environments to residential buildings and clinical units, supporting a large range of
applications. These applications include, among others, military and civilian surveillance,
tracking systems, environmental and structural monitoring of home and building automation,
agriculture and industrial settings, and health care.
The literature review presented in this chapter is an overview of the different concepts involved in the conceptual framework developed in this research. The influential elements and considerations of self-organized, multi-agent based distributed architectures for the deployment and configuration of ad hoc WSNs have been presented in the light of technological and organizational developments, underlining their importance.
Chapter 3: The Theory of Hybrid Intelligent Decision Systems
3.1 Introduction
The emergence of advanced internet and wireless technologies has made real-time remote monitoring and system control an essential need for organizations seeking to capture real-time heterogeneous data in a timely manner. This requires accessing accurate information and making improved real-time decisions to meet strategic objectives and respond appropriately to changes in the environment. Increasingly, however, these organisations are facing tremendous challenges in their quest for up-to-date and accurate context-aware information. This requires real-time integration of reporting and analytics with the aim of developing intelligent capabilities for handling large amounts of complex and non-traditional data, making better decisions and taking meaningful corrective control actions at the right time.
These integrated technologies are pushing toward the full continuous web monitoring of
environmental conditions and facilities through the increasing use of intelligent solutions and
ubiquitous computer and internet. A smart design environment is needed to help build,
deploy, operate, integrate, aggregate, and use a spectrum of services delivered over the web,
processing data collected from wirelessly connected heterogeneous devices, in an effort to
cope with increasing market changing requirements induced by globalisation, digitalisation
and changes in safety and security legislation. This design environment is based on the use of
an integrated conceptual framework for intelligent support systems.
Figure 3.1: Integrated Conceptual Framework for Web-based Hybrid Intelligent Decision Support Systems
This integrated conceptual framework develops a specific interest in how to apply the most suitable optimisation model automatically, without the need for expert intervention, and to integrate generic real-time collaborative decision making processes with the following particular characteristics, illustrated in Figure 3.2:
the kinds of unstructured or semi-structured data generated by heterogeneous entities and
resulting in tangled clumps of messy data culled from multiple sources,
the horizontal data integration between the network control and the decision making
levels at the different management support levels, taking into account the four data
dimensions which are volume, variety, velocity and relevance, and
the kinds of unstructured or semi-structured decisions required for the adaptation of
agents behaviours representing the organisational units and their associated patterns of
deployment and control.
The tasks resulting from decision activities are differently structured depending on the
decision characteristics and the management level decision requirements in terms of
uncertainty, aggregation, prediction and validity. Ranging from structured to ill structured,
the decision tasks result in programmed to non-programmed decisions. The use of intelligent
support systems aims at reducing the level of non-programmed decisions.
The analysis of decision tasks in the domain of real-time remote sensing, monitoring and tracking, and system control shows a growing need to detect and react to environmental events as they happen in real life. This reduces the length of the batch window to a few seconds, or even less, and requires the use of web-based, agent-enabled decision support systems as the fastest and easiest way to bring real-time data onto the World Wide Web and use appropriate services to integrate data and decision making activities. The validation of detected events requires the tracking of all changes leading to these events, for auditing and analysis purposes, to trigger appropriate responses using real-time decision mechanisms, mainly in large network systems.
Keeping data synchronised across the networks becomes an increasingly important need, to ensure that the most recent status of all system agents, associated with devices and people, is available within the distributed applications accessible to end-users in the form of web services. Software frameworks are proposed to support these applications across relatively different network configurations, enabling access to real-time knowledge discovery and extraction, and enhancing the quality and timeliness of decision making support, owing to the widespread use of the Internet and web-based platforms and the wider accessibility of smart devices, wireless sensor technology and mobile communications integrated in a dynamic ubiquitous environment [150].
3.3 Decision Support Systems
The integration of database and modelling capabilities has benefited from technological innovations, continuously enabling far more powerful DSS functionality. DSS are computer technology solutions that can be used to support complex decision making and problem solving, improving decision making process support and decision effectiveness and efficiency. They support some or several phases of a decision making process, at the individual, team, organisational or inter-organisational level. They were initially developed to support individual decision-makers, but later advanced DSS technologies were developed to support workgroups or teams, especially collaborative support systems and virtual teams [151].
The advent of web-based services and the growing technological development of mobile tools, mobile e-services and wireless internet protocols have extended the inter-organizational decision support systems used for evaluating strategies for device deployment, and for agent and service composition. In these intelligent systems, designed to support different decision activities in distributed collaborative environments in terms of time, spatial distribution and individual and group interaction, the integration of the different aspects of the decision problem and the varied group compositions requires the study of a set of characteristic requirements needed to organise the different phases and reduce the span of the decision-making process.
3.3.2 Decision Making Process Characteristics
Similarly, decision making processes can be ill-structured. Their support requirements have been specified in a generic design framework aimed at iteratively integrating the different steps of the solving process to reach a satisfying decision. These steps are intelligence, for the search for problems; design, for the development of alternatives; and choice, for the analysis of alternatives and the implementation of the selected alternative [154], as illustrated in Figure 3.3.
The decision making process is thus structured into three iterative steps (Intelligence, Design & Choice), which can overlap, each requiring one or several global processes to serve its purpose.
A decision making process can be based on unstructured data, and the lack of structure in the data requires time and energy for its compilation using data analysis mechanisms. DSS aim at reducing this lack of structure and at creating a relational structure, a sort of data intelligence, that enables its integration into the data model and constitutes an important aspect of the problem solving formulation and the development of alternatives. Turning unstructured data into structured or semi-structured data enhances the modelling capabilities needed to support the search for and development of decision alternatives, and their comparison using multi-criteria attributes to trade off their possible outcomes.
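A minimal sketch of such a multi-criteria comparison is given below, using a simple weighted-sum trade-off over normalised scores; the criteria, weights and alternative scores are illustrative assumptions rather than values from this research.

```python
# Weighted-sum multi-criteria comparison of decision alternatives
# (criteria, weights and scores are assumed for illustration).

CRITERIA_WEIGHTS = {"coverage": 0.5, "energy_cost": 0.3, "deployment_time": 0.2}

# Scores are normalised to [0, 1]; cost-type criteria are entered as (1 - cost).
ALTERNATIVES = {
    "dense-grid":    {"coverage": 0.95, "energy_cost": 0.40, "deployment_time": 0.30},
    "sparse-grid":   {"coverage": 0.70, "energy_cost": 0.80, "deployment_time": 0.80},
    "cluster-based": {"coverage": 0.85, "energy_cost": 0.70, "deployment_time": 0.60},
}


def score(alternative):
    return sum(CRITERIA_WEIGHTS[c] * alternative[c] for c in CRITERIA_WEIGHTS)


if __name__ == "__main__":
    ranked = sorted(ALTERNATIVES, key=lambda a: score(ALTERNATIVES[a]), reverse=True)
    for name in ranked:
        print(f"{name}: {score(ALTERNATIVES[name]):.3f}")
```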
Problem analysis and model development are two essential elements of problem solving and
rational decision making. In this context, decision support requirements include the
facilitation of the creation of decision models aimed at searching or generating alternatives,
and comparing them. This facilitation is based on an iterative process that can be overlapping
in the sense that the search or analysis of alternatives can induce a problem reformulation or
different design actions. The steps of the decision making process overlap and blend together
in the effort of searching satisfying alternatives, requiring often frequent looping back to
earlier stages to incorporate what has been learned about the problem formulation and its
solving, enhancing its perspectives for a much broader form of analysis and reasoning,
relying more on intelligent data and modelling in an integrated framework.
DSS combine sophisticated database, knowledge and model management capabilities to meet
the above mentioned facilitation requirements. They enable access to internal or web-based
external data and information, and knowledge. They rely on internal and external powerful
modelling functions available locally or accessed via a web-based model management
system. They are based on advanced powerful user interface supporting interactive group
interactions, knowledge and model representation, decision maker’s rapid information
dissemination, queries, reporting, and graphing functions. They provide decision makers with
smart support, enabling them timely access to a panoply of decision making tools via services
through mobile and smart wireless devices as much as through their desktop computers.
These services include internal and web-based services for data warehousing, on-line
analytical processing, data mining, decision tools, collaborative decision support systems,
virtual teams, knowledge management, model management and optimization-based DSS
models, and active decision support for distributed multi-agent systems [156].
and dependent data marts. These applications can be grouped by their nature to construct a
panoply of DSSs based on processing patterns: data (data driven DSS), knowledge
(knowledge driven DSS), model (model driven DSS), and document (document driven DSS)
[157].
Data marts, on-line transactional processing (OLTP) and on-line analytical processing (OLAP) are three essential tools supporting dynamic data warehousing, a system used for data mining, analytics, visualisation, decision making and reporting, and information and knowledge fusion, as illustrated in the system data management architecture shown in Figure 3.4. Data systems can be transactional (OLTP), providing source data to data warehouses; analytical (OLAP), helping to analyze it; or master data management (MDM), linking all of the critical data to one file.
intuitively and rapidly available to end-users for routine and ad hoc analysis. Data
warehouses are collections of integrated data from different sources, remotely distributed,
made timely available to operational systems supported by different types of DSS [159, 160].
b) Data Migration
Data migration is a process aimed at extracting, transferring and loading data within a multi-distributed system architecture when replacing server, storage or network equipment, performing maintenance or upgrades, or relocating data centres. This process, which is performed automatically, is made up of five steps: design, extraction, cleansing, loading, and verification and testing.
Of great complexity in this process is the data cleansing to improve data quality, eliminate
redundant or obsolete information, and data synchronisation, harmonisation and verification
to ensure data consistency between source and target storage, and also that all ordered and
unordered data was accurately translated and complete, identifying data disparities and fixing
data loss. The scope of data migration can be the storage, or the database, or the application,
or the business process [161].
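The sketch below walks through the extract, cleanse, load and verify steps on a small in-memory example; the record layout, the cleansing rule (dropping duplicates and incomplete readings) and the consistency check are illustrative assumptions, not a specific migration tool.

```python
# Data migration sketch: extract, cleanse, load and verify between two
# in-memory stores (record layout and cleansing rule are illustrative).

SOURCE = [
    {"node": "n1", "temp_c": "21.5"},
    {"node": "n1", "temp_c": "21.5"},      # duplicate to be cleansed
    {"node": "n2", "temp_c": None},        # obsolete/incomplete record
]


def extract(store):
    return list(store)


def cleanse(records):
    seen, clean = set(), []
    for rec in records:
        key = (rec["node"], rec["temp_c"])
        if rec["temp_c"] is None or key in seen:
            continue                        # drop redundant or unusable data
        seen.add(key)
        clean.append({"node": rec["node"], "temp_c": float(rec["temp_c"])})
    return clean


def load(records, target):
    target.extend(records)


def verify(source, target):
    # Consistency check between source and target after cleansing.
    return len(target) == len(cleanse(extract(source)))


if __name__ == "__main__":
    target = []
    load(cleanse(extract(SOURCE)), target)
    print(target, verify(SOURCE, target))
```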
c) Data Mart
A data mart is a corporate data model, representing a sub-model of the data warehouse or designed as a component of the master, distributed data warehouse, aimed at subject-oriented reporting and analysis, and containing data typically expressed in some combination of criteria covering a functional area such as geography, organisation and application, or
business functions. The data is structured in a data model representing a database composed
of several tables that support end-users analysis leading to more informed decisions for
various functional segments of the organisation. Operational systems that handle the day-to-
day transactional activities supporting the different management levels are the primary
sources of data for data marts. However, the primary data may need to be processed to meet
the format requirements of data marts. This intermediate processing consists of data
translation and formatting using a variety of filters and formulas for data aggregation [162].
It is essential to define the business process requirements in terms of the users' varied information
requirements, to better understand how decisions are made within each functional area of the
organisation, taking into account the changing needs of end-users. These requirements
include:
Functional criteria such as specific data content, relationships within and between groups
of data, the logical relationships among the objects, the data transformations required,
frequency of update, priorities, and level of detail, and
Technical criteria that specify where the data feeding the data mart originates, and the
most effective and efficient ways of storing and retrieving that data.
Data marts are based on the use of metadata containing information about the data, and
providing a directory of technical and functional views of the data mart. Metadata includes a
description of the data, its format and sources, and has two aspects:
Technical metadata, supporting data mart management by indicating the data acquisition rules,
explaining the transformation of source data into the storage format of the target data
mart and the data updating mechanisms, and specifying the schedule for backing up and
refreshing data, and
Functional metadata, indicating to end-users what information is contained in the data mart, and
how to access it.
Among other requirements for the design of data marts, the problem of data creation,
maintenance and security remains essential. Data can be created locally or in a central data
warehouse. Data marts can be dependent or independent, depending on the data sources
feeding the data mart, their frequency of use, and the communication resources required.
Independent data marts are standalone components supporting the Extraction-
Transformation-Loading (ETL) process, aimed at moving data from operational
systems and external sources, and at filtering and loading it into the data mart. Dependent data
marts rely on a master data warehouse from which data can be extracted and distributed to
local data marts.
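As an illustration of the ETL process feeding an independent data mart, a minimal Python sketch is given below; the table names, field names and records are hypothetical and serve only to make the extraction, transformation, loading and verification steps concrete.

# Minimal ETL sketch for an independent data mart (illustrative names and data only).
import sqlite3

# Extraction: rows as they might arrive from an operational system or external source.
operational_rows = [
    {"zone": "Z1", "device": "smoke-01", "reading": 0.82, "valid": True},
    {"zone": "Z1", "device": "smoke-02", "reading": 0.10, "valid": False},
    {"zone": "Z2", "device": "smoke-03", "reading": 0.55, "valid": True},
]

# Transformation: filter invalid records and reshape them to the data mart schema.
mart_rows = [(r["zone"], r["device"], r["reading"])
             for r in operational_rows if r["valid"]]

# Loading: populate the subject-oriented data mart table.
mart = sqlite3.connect(":memory:")
mart.execute("CREATE TABLE detection_mart (zone TEXT, device TEXT, reading REAL)")
mart.executemany("INSERT INTO detection_mart VALUES (?, ?, ?)", mart_rows)

# Verification: a simple consistency check between source and target.
loaded = mart.execute("SELECT COUNT(*) FROM detection_mart").fetchone()[0]
assert loaded == len(mart_rows)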
Consistent schemas and data formats are required to support rich varieties of distributed and
corporate data views enhancing interoperability, achieving data consistency and increasing
data usability. These schemas contain measurable or countable fact data.
and manage transactional, web-service-oriented applications aimed at data entry and at responding
instantly to end-users' requests from different computer platforms in a network, while
monitoring transaction processing for a better coordination of data services. They ensure
that data is not updated while in use by other users (concurrency controls) and that all steps of the
transaction are satisfactorily completed (atomicity controls) [163].
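A minimal sketch of the atomicity control, using Python's standard sqlite3 module and a hypothetical readings table, is shown below: when one step of the transaction fails, the whole transaction is rolled back rather than leaving a partial update.

# Atomicity sketch: either every step of the transaction is applied, or none is.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE readings (node TEXT PRIMARY KEY, value REAL)")
db.execute("INSERT INTO readings VALUES ('node-1', 0.2)")
db.commit()

try:
    with db:  # the connection commits on success and rolls back on failure
        db.execute("UPDATE readings SET value = 0.9 WHERE node = 'node-1'")
        db.execute("INSERT INTO readings VALUES ('node-1', 0.5)")  # violates the primary key
except sqlite3.IntegrityError:
    pass  # the whole transaction is rolled back, not just the failing step

print(db.execute("SELECT value FROM readings WHERE node = 'node-1'").fetchone())  # (0.2,)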
OLAP uses dashboards aimed at facilitating ad hoc analysis, enabling quick, easy data access
from point-and-click interfaces for rapid decision making.
g) Data Modelling
A data model is a diagram that illustrates the relationships between data representing the
description and behaviour of the different organisational entities. Data is stored in multi-
dimensional databases organised using multi-dimensional structures to configure hierarchical
and functional relationships between the different entities. The data attributes are defined as
dimensions used in terms of intersections to represent the data selection required in data
extractions to populate data marts. Sub-attributes can be used when attributes refer to
different categories or a period of time. The conceptual data model represents the highest-
level relationships between the different entities, whereas the enterprise data model, which is
similar to the conceptual data model, represents the functional data requirements of a specific
activity involving those entities. The logical data model, which is used for the creation of the
physical data model, illustrates the links between the specific entities, attributes and
relationships involved in an activity [165], and is used in predictive data analysis and
visualisation.
h) Predictive Modelling
Predictive analysis is a technique used in data mining, predictive analytics and machine
learning to extract information from data and use it to predict trends and behaviour patterns.
Predictive modelling is a process used in predictive analytics to build a future behaviour
model in terms of forecasting probabilities and trends by analysing current and historical
facts to make predictions about future behaviour states and identify unknown events [166].
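As a minimal illustration of predictive modelling, the Python sketch below fits a simple linear trend to a short, hypothetical series of historical readings and uses it to forecast the next value; real predictive models would of course be considerably richer.

# Fit a linear trend to historical readings and forecast the next value (illustrative data).
history = [(0, 21.0), (1, 21.4), (2, 22.1), (3, 22.9), (4, 23.6)]  # (time step, temperature)

n = len(history)
mean_t = sum(t for t, _ in history) / n
mean_y = sum(y for _, y in history) / n
slope = (sum((t - mean_t) * (y - mean_y) for t, y in history)
         / sum((t - mean_t) ** 2 for t, _ in history))
intercept = mean_y - slope * mean_t

def predict(t):
    """Forecast a future reading from the fitted trend."""
    return intercept + slope * t

print(predict(5))  # predicted reading at the next time step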
i) Data Visualisation
Data visualisation is an essential function of data management supported by intelligent
software. It is aimed at enhancing the understanding of data, illustrating its significance and
interpreting its different patterns, trends and correlations through the use of interactive capabilities
and indicators based on visual concepts, which include infographics, dials and gauges,
geographic maps, sparklines, heat maps, and detailed bar, pie and fever charts [167].
a) Association Rules
Aggregation and analysis of data enable the identification of if/then patterns by using the
concepts of support and confidence: the support, defined here as how frequently the items
appear in the data collection, and the confidence, defined here as how often the if/then
statement is found to hold when its condition occurs.
If/then statements are called association rules in the sense that they help establish
relationships between seemingly unrelated data in a database or other information repository. If and then
are the two parts of an association rule: the antecedent condition (if) is an item found in the
data, and the consequent action (then) is an item found in relation with the antecedent.
Association rules are useful for analysing and predicting the behaviour of the organisational
and functional entities.
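A minimal Python sketch of these two measures, computed over a small hypothetical collection of event records, is given below.

# Support and confidence for a candidate if/then rule (illustrative items only).
transactions = [
    {"smoke", "heat", "alarm"},
    {"smoke", "alarm"},
    {"heat"},
    {"smoke", "heat"},
    {"smoke", "heat", "alarm"},
]

def support(itemset):
    """Fraction of records containing every item of the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """How often the rule 'if antecedent then consequent' holds when its condition occurs."""
    return support(antecedent | consequent) / support(antecedent)

rule_if, rule_then = {"smoke", "heat"}, {"alarm"}
print(support(rule_if | rule_then))    # 0.4
print(confidence(rule_if, rule_then))  # 0.666...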
Of great significance in the analytics process is the continuous effort to increase prediction
accuracy and to prescribe better decision options by iteratively performing the last two steps
of the process, considering new data to re-predict and re-prescribe.
Different types of knowledge compose a domain, and differ from each other depending on
the domain complexity. Knowledge includes structural and tacit knowledge, and it is of great
importance to understand where knowledge creation occurs, and how it can be represented,
created, elicited, converted, integrated, shared, transferred and applied to enhance practice.
Of great significance in KM are the codifiability of knowledge and the consciousness of its use [170];
knowledge can be classified according to the management level of its codification
and use: operational, tactical and strategic.
Knowledge can be explicit/propositional, and its production can rely on prior acquired skill
(procedural knowledge) and judgment (based on experiential knowledge). It is iteratively
used to formulate new knowledge in the form of propositional knowledge in its explicit form.
Tacit knowledge is converted into explicit knowledge using knowledge-based processes, and
the continuous interaction between these two types of knowledge is essential to the creation
of new knowledge [171]. However, some aspects of tacit knowledge remain ineffable and
are better handled by non-formal knowledge processes.
a) Structural knowledge
The elicitation of structural knowledge consists of the domain declarative knowledge,
establishing knowledge links or relationships between the entities’ attributes represented in a
cluster graph. Structured knowledge is coded and represented as illustrated in Figure 3.5,
by a map of clusters called individual knowledge graphs, in which each graph shows inter
and intra cluster links representing the cluster edges between nodes. Measurement of the
graph size is used for the assessment of the reliability and validity of the knowledge
elicitation process, and establishing the link with explicit knowledge [172]. It is based on the
use of the following concepts, a computational sketch of which follows the list:
Size: number of nodes,
Degree: number of edges,
Density: the number of edges present relative to the number possible, within clusters and across the graph,
Diameter: the longest shortest path between any two nodes in the network, and
Number of clusters.
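The sketch below, using the Python networkx library on a small hypothetical knowledge graph, shows how these measures can be computed.

# Graph measures for an elicited knowledge cluster graph (node names are illustrative).
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("sensor", "reading"), ("reading", "threshold"), ("threshold", "alarm"),
    ("alarm", "evacuation"), ("sensor", "location"),
])

print("size (nodes):      ", g.number_of_nodes())
print("degree (edges):    ", g.number_of_edges())
print("density:           ", nx.density(g))
print("diameter:          ", nx.diameter(g))            # longest shortest path
print("number of clusters:", nx.number_connected_components(g))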
b) Tacit Knowledge
Complex real decision problems are associated concurrently to both explicit and tacit
knowledge due to their difficult characteristics such as:
Different methods and practices can be used to make knowledge easier to detect, and to use it
in the search for patterns that might lead to new insights. These include asking individual
team members to provide their understanding of the knowledge processes they are involved
in, and also gathering all the domain knowledge embedded in the documents used by these
processes and accessed through external networks [169].
3.3.3.4 Group Support Systems
Group support systems (GSS) referred to, also, as group decision support systems (GDSS)
support integrated collaborative computing, combining communication, hardware and
software innovative technologies and including smart devices and intelligent agents to
enhance team-based decision making and decision quality, and improve group performance
and satisfaction in networked distributed physical and virtual environments. They are based
on the use of a context-aware filtering process for adapting content delivered to individual
and mobile users, identified by context-aware profiles, expressing the users’ preferences for
particular context situations involving them [139].
GDSS support and structure group interaction, and facilitate collaborative learning and
training using virtual teams, computer-supported collaborative and distributed work including
e-group meetings. Of great significance in the design of GDSS is the release of three
moderators from consideration (group size, time and proximity), so that the focus remains mainly on task
type, level of technology and the existence of facilitation through groupware and GSS tools,
enhancing the group adaptation process from a generic support perspective.
generic, adaptive and flexible enough to be used differently in a variety of design decision
problems, concurrently integrating multiple cooperative knowledge sources and models.
Collaborative support systems focus on decision knowledge modelling and the use of support
schemes for their representational capabilities for non-explicit (tacit) knowledge involving
learning and skills. They require a robust set of collaborative new web-based tools for use
within a network based environment. These tools provide collaborative decision makers with
rapid, accurate access to multiple sources of information, coordinating information and
intelligence exchange directly between numerous organisational entities and agents. They
include integrated smart devices, involving intelligent devices and sensors, enabling
environmental data capture and obtaining the synergy of their integrated use in terms of
intelligence-gathering capabilities [174].
The development of web based knowledge services is required to invoke existing knowledge
models for enhancement through adapting, expanding, customising and improving them by
processing different context situations. This process aims at providing a generic model or
system creator that can help in the sharing of knowledge amongst different domain experts,
and the converting or translating of abstract ideas contained in tacit knowledge into models
needed to build knowledge process models. These models are classified in the form of
ontologies that are used as communication schemes in the web-linked modelling
environment, to facilitate the interaction between network agents. This interaction is
part of the ontologies management that enables the identification of acceptable agent
behaviours that validate the effective use of embedded knowledge models to solve decision
making problems in the knowledge domain, and also the different steps of the knowledge
processes [174].
c) Collaboration Environment Design Support
Of great significance in the collaboration environment design support illustrated in
Figure 3.6 is the inter-layer transformation sequence, containing the following three steps of
transformation:
The collaboration environment design process is based on the continuous iteration of its
different steps, subordinated to the validation of the different design levels. The collaboration
model based on the context representation and context description is associated to the success
control factors and the goals hierarchy. The business process model associates detailed
work task support and agent interaction protocols with a detailed process model and data
model. The system design layer structures the process service capabilities, and is associated
with the different agent software, supported by the detailed data flows and the database design.
interacting in a multi-distributed agent system, whereas the user’s collaborative context
includes the concepts of group, role, member, calendar, activity, shared object, and process,
which are the collaborative elements.
The number of physical and collaborative elements can vary depending on the organisational
settings and the complexity of the collaborative activities. In the context model representing
the elements' functional association, the elements Member and Group can be associated with the
Agent and Web-based service in an attempt to represent the service composition model,
assuming that an intelligent agent is an element of a service designed to describe a set of system
functionalities, and that a collaborative activity involves a set of services.
The profile model is based on a set of profiles associated with a context description and a
context element, depending on whether the preferences are application-context based or agent related.
Each profile is linked to a set of preferences which might be shared simultaneously by
different context applications and agents. An agent preference is described by a content
containing an event and a result, and indicating a format and its different characteristics, as
illustrated in Figure 3.9, whereas an application profile is a preference request. The
preference matching is supported by the preference association process (PP) [176].
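The following Python sketch is only an illustrative reading of this model: the field names and the matching criterion are assumptions made for clarity, and do not reproduce the PP process of [176].

# Illustrative profile/preference association: an application profile expresses a
# preference request that is matched against agent preferences (assumed field names).
from dataclasses import dataclass

@dataclass
class AgentPreference:
    event: str   # the event the agent is interested in
    result: str  # the result it expects to receive
    fmt: str     # the delivery format and its characteristics

@dataclass
class ApplicationProfile:
    requested_event: str
    requested_format: str

def associate(profile, preferences):
    """Toy preference association: return the preferences satisfying the request."""
    return [p for p in preferences
            if p.event == profile.requested_event and p.fmt == profile.requested_format]

prefs = [AgentPreference("fire-alert", "zone-status", "json"),
         AgentPreference("fire-alert", "zone-status", "sms"),
         AgentPreference("access-denied", "badge-id", "json")]
print(associate(ApplicationProfile("fire-alert", "json"), prefs))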
Figure 3.9: Context element profile and preference model.
a variety of computer-based models, to make correct real-time intelligent decisions during
process modelling, knowledge modelling and decision support [174].
Real-time collaboration supports the refinement of knowledge processes through the
use of integrated advanced computer and information technologies, to enhance the discovery,
learning and knowledge acquisition process in general, and to augment and empower the
participants' cognitive processes in particular. Of great importance in this support is the need
to process real-time data with the aim of structuring, identifying and deriving potentially
relevant knowledge, and of sharing the process of constructive knowledge refinement, involving
complex levels of interactivity that guide agents to establish key influential relationships
between decision variables and derive new decision options. During the real-time decision
collaboration process, web-based support tools and information strategies are needed for
knowledge construction, enabling agents, fully interconnected and iteratively interacting in a
group decision system, to establish new associations between concepts and to actualise, refine and
synthesise data and information, creating new knowledge for effective incremental real-time
decision making.
hierarchical multi-distributed environment. They support cross-functional collaboration
within and between organisations [178]. Their organisational boundaries are extended by
computing and communication technologies, overcoming the teams' geographical
restrictions on the sharing of systems, processes and devices.
The concept of a virtual team is based on the use of a model defined around three aspects:
purpose, people and links. Of great importance in this model is the significance of the
purpose of the team composition, which can be translated in terms of common goals,
individual tasks and results [179]. These tasks can be synchronous or asynchronous requiring
the use of coordination mechanisms.
a) Empowerment
Empowerment of virtual teams leads to better learning and process improvement when
performing highly complex tasks in organisations with high levels of dynamic, complex
change and environmental uncertainty. It applies across the spectrum from face-to-face to purely
non-physical interaction, and is related to virtual team performance. Defined as increased task motivation, team
empowerment emerges from collective cognition [180]. This motivation results from the
team members' collective assessments of their interdependent organisational tasks, and of the
conditions under which they have been performed. This is reflected by the different
behaviours of the team members, with a particular emphasis on their individual level of
motivation.
Individual and team empowerment requires the use of consensus models to enhance the
performance of team processes in terms of improved communication between the different
members, and conflict elimination. This involves interdependent actions to be taken by the
team members to achieve team goals [181].
b) Virtualness
Virtual teams are a viable alternative to face-to-face work due to the importance of the
information technology role in diminishing physical and temporal boundaries, and the
dissemination of knowledge across organisations, enhancing the dynamics of knowledge
development and transfer, and organisational learning [182].
the members' location and their work cycle synchronicity. Other influencing aspects, such as
culture, are not included in this framework. In this research, virtualness is associated with
collaborative information technology.
c) Virtual Tools
Of great significance in the design and selection of virtual tools is:
the full understanding of the extent to which the virtual team members use the support
tools to coordinate and execute their organisational tasks and team processes,
the amount and type of information provided by these tools and exchanged by the virtual
team members, and
the concurrency of events that may be connected by causality or just by meaning, not
having a causal link [183].
The concept of synchronicity, associated with meaningful coincidences, is of great importance in
this view of the structure of reality, which represents an acausal connection of coincidentally
meaningful events.
Knowledge domains are fragmented, and the emergence of service systems has tremendously
contributed to linking their collaborative activities and supporting the interaction among the
various organisational agents which include all resources and shared information, all
connected internally and externally across networks. This interaction takes place at the
service interface between the service provider and service users.
Of great importance in the services system development and delivery, which includes service
management and operations, is the complex multi-distributed architecture to support
connecting the content of core and provided services, and their availability across the
network, in the form of customised pluggable, reliable and robust integrated functional and
operational services. The process supporting service availability, oriented to problem solving
so as to integrate evolving external requirements, is supported by dynamic reconfigurable systems
which are self-adaptive, built from self-assembling and self-adaptive
components, and which benefit from the increasing usability of the panoply of advanced adaptive
information, computing and communication technologies. The implementation of such
systems requires a component model and a service registry to support a dynamic service
management.
Services include third-party services (provided services) and internal services, called
core services, which are those managing knowledge, data, tasks and communications.
Their invocation poses the problem of continuous service availability, taking into
consideration some key factors such as service failures, server failures, service
maintenance and application program control problems.
Available services are identified through service discovery, prior to their composition and
execution. During the composition process, they can be orchestrated, i.e. grouped together or
packaged to automate a process or to synchronise data in real time. The size of the package, in
terms of the level and types of services and of their required resources, is a key performance factor
of the orchestration process, which requires dynamic and intelligent allocation of resources.
Of great importance in the service management process is the process workflow supporting
the appropriate selection of the business logic modules linking the service providers to assist
services discovery, composition, and execution before elaborating and delivering results to
end-users [188]. Of great significance in the service invocation is the service knowledge and
information available to end-users beforehand, in terms of what the service is for, the information
requirements before the service execution, the pre-defined processes and procedures, and
whether the service will fulfil the users' needs.
b) Architectural Models
The services system architecture is based on the use of the concept map defined as a
graphical way of representing key concepts and relationships [184]. This architecture is built
around several component models focusing on the creation and use of services by agents.
These architectural models are:
Message oriented model: focuses mainly on messages, message structure and message
transport,
Service oriented model: focuses on aspects of service and actions,
Resource oriented model: focuses on resources used by services, associated with their
owners and users, and
Policy model: focuses on constraints on the behaviour of agents and services.
c) Service Composition
Existing independently developed software agents and hardware smart components, called
capabilities, can be grouped in different web services that can be used to support the
automation of a business or knowledge process. These services can be composed or
recomposed to build a wide variety of solutions with different types of design principles.
Once a service has been composed or recomposed, it can be reused and does not require any
design change. A web service has functionality, and is implemented by a concrete agent
associated with one or several capabilities.
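As a minimal illustration of this composition principle, the Python sketch below chains three hypothetical capabilities, standing for independently developed agents, into a single composed service; recomposition reuses the same capabilities without redesigning them.

# Composing independently developed capabilities into one service (all names are stubs).
def read_sensor(node_id):
    return {"node": node_id, "value": 0.87}            # capability 1 (stubbed data capture)

def classify(reading, threshold=0.8):
    return "alarm" if reading["value"] >= threshold else "normal"  # capability 2

def notify(status, recipients):
    return [f"{r}: {status}" for r in recipients]      # capability 3 (stubbed delivery)

def fire_detection_service(node_id, recipients):
    """A composed service: existing capabilities are chained, not redesigned."""
    reading = read_sensor(node_id)
    status = classify(reading)
    return notify(status, recipients)

print(fire_detection_service("node-7", ["control-room", "warden"]))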
d) Service Registry
Services end-users and providers work together through a service registry along the service
bus. Depending on the importance and complexity of the knowledge domain, the service
framework supporting a web-based collaboration might structure separately services per
domain using parallel services buses. The service registration requires the implementation of
a service policy development aimed at facilitating the collaboration across the Web of service
consumers and service providers [186].
The information distribution and knowledge delivery through services within a collaboration
process poses the problem of services learning support in terms of registered services and
how to use them in relation with identified collaborative activities, relying less on human
assistance or guidance. This problem shows the design complexity of personalised learning
support in dynamic and heterogeneous services learning environments, creating adaptive
learning environments for the Web towards the creation and customisation of web personal
services. This complexity is reflected in the key learning requirements for the support of
collaborative activities to enhance human limitations of information processing with web
service-based technologies [190]. These requirements are:
Learning support complexity requiring the support of different learning cognitive styles.
Individuality and adaptability support consisting of adapting to the ability and skill level of
individual users.
Interaction support enabling users to be integrated in the interaction process.
Activity and assessment support reflecting the users’ participation in the service
understanding, execution and improvement.
3.4 Reasoning Support for Specific Domains
The reasoning required to support specific domains is twofold: knowledge and model based
reasoning.
A real-time web-based agent tool and wireless sensor network process monitoring system
supports the implementation of e-surveillance systems in which a model-based component is
included in order to decide the rules embedded in the planning and surveillance processes.
This component integrates a hybrid knowledge/case-based system (KBS/CBR), using a
combined rule- and case-based approach to select appropriate decision-making support
models [191].
Emerging intelligent hybrid decision support systems supporting ad hoc WSNs are based on a
flexible, adaptive and hybrid multi-agent distributed architecture composed of a number of
heterogeneous components originating from a variety of integrated innovative technologies.
This architecture allows support systems to embody deep, up-to-date knowledge elaborated
iteratively from causal loop modelling and reasoning under uncertainty, using system
dynamics and simulation modelling to predict agent behaviour under different policy
strategies, and increasing efficiency and accuracy in agent management. Additionally, it
employs local data pre-processing and statistical techniques for intelligent data analysis and
reduction which are based on the use of database techniques for data storage, analysis and
representation [192].
Different decision making tools such as Cognitive Maps (CM) and Causal Loop Diagrams
(CLDs) are extensively used to understand the feedback structure of the system, in terms of
inference on the system behaviour.
Model based reasoning is required when using explicit models for emerging issue tracking.
Such implementation in the real world is based on the use of an engine to associate the model
knowledge with observed data to infer conclusions at run time. These models are stored in a
model base, and can vary in their nature (qualitative or quantitative), represent normal or
unusual situations, and differ in other characteristics (static or dynamic, deterministic or
probabilistic, causal or non-causal, compiled or first principles).
3.5 Summary
DSS have evolved extensively through the continuous integration of advanced
communication and information technologies, providing self-organisation and real-time
collaboration. The full automation of real-time decision-making activities using service
systems is an important development, based on substantial improvements made at different
levels of decision management: data, knowledge, models and virtuality.
The acquisition and creation of valid and updated knowledge extends far beyond the
transmission of knowledge and the development of advanced information-processing skills.
Knowledge procurement is the basis for the creation of intelligence distributed among the
organisational entities, which include people, devices and processes fully interconnected via a
panoply of networks.
Chapter 4: The Generic Design
4.1 Introduction
The design under consideration in this research concerns the elaboration of generic solutions
for indoor distributed detection applications requiring the use of HIDSS to support the design,
configuration and deployment of ad hoc integrated WSN RFID used for real time detection
data capture. Indoor detection requirements result in detection specifications that need to be
met by detection equipment installed in the covered indoor premises.
The HIDSS aims at data, information and knowledge fusion, which consists of supporting
information scanning and emerging issue tracking, using an open-loop control process to
synthesise and generate vigilant information processing tasks in knowledge-based reasoning.
This is based on applying both data and model driven approaches to elaborate and structure
knowledge and group decision making processes. This study is in the context of strategic
planning and relates to making real time decisions based on how best to utilise limited
intelligence, surveillance and reconnaissance (ISR).
Strategic real-time decision making supports the organisational need to remain, at all times,
alertly monitoring for weak signals and discontinuities in the organisational environment, in
terms of data sensing, storage and communication across networks, identifying emerging or
established threats through issue tracking, goal seeking and sensitivity analysis.
This chapter describes succinctly the main elements of the generic design conceptual
framework for the development of HIDSS, with a particular focus on the domain analysis
method and the knowledge domain analysis. The domain analysis method is essential to
enable the use and re-use of the design across different domains, and the knowledge domain
analysis is a necessary step to elaborate generic knowledge tasks of the domain of interest,
and reduce and solve their complexity. A specific knowledge domain has been suggested for
a case study in the area of indoor distributed detection that includes sensing, localizing,
tracking and monitoring, for a wide range of applications, including emergency planning for
fire detection, control access and other related issues.
4.2 Motivations
The motivation behind the use of a generic design framework is the need to identify re-usable
modules of processes, hardware and software to:
- Reduce the domain and system analysis complexity by applying the separation of concerns principle,
- Analyse the individual knowledge domain models during the domain analysis, to better understand their mutual influence and dependency during their implementation,
- Analyse the behaviour of the system modules working in isolation before their integration in the system, properly checking the appropriateness of the data structures and the effectiveness of the algorithms and models,
- Better understand the mutual interaction of the system modules, for a structured agent design and service composition and a flexible and adaptive distributed implementation,
- Facilitate the incremental design and implementation of the system architecture decoupling the fusion functions, and
- Reduce the system design and implementation costs.
These distributed services integrate concurrently third party data and models, and share tools
and processes jointly supported by web-based GDSS linking end-users. They support generic
solutions in the domain of interest, in which the various distributed applications are structured
in the form of services supporting fusion functions for the elaboration of knowledge
processes, models and data, and support system capabilities as illustrated in Figure 4.1. The
proposed generic design conceptual framework focuses on service interoperability which is
based on generic and recursive design in problem solving using the different management
components (knowledge, model, data and support capabilities) to deploy, operate, integrate
and aggregate end-user services.
In dynamic environments, only real-time data for specific domains can provide decision
makers with relevant knowledge using a panoply of models structured in a model base, to
provide users with predictions and explanations as shown in Figure 4.2. Up-to-date and
trusted real-time information remotely available, is imperative to making actionable decisions
that enable heterogeneous organisational units to adapt their behaviour to environmental
changes. These organisational units interact with each other, and their interaction can result in
some influence between them, requiring a comprehensive approach to their coordination and
control. Such an approach is illustrated in the dynamic collaborative framework shown in
Figure 4.3. This framework is generic across domains, treating the sources of
domain knowledge as plug-ins selected according to the knowledge sources and the tasks
under consideration. It is based on defining a strategy for the entities' deployment and
configuration, and a control policy effectively and efficiently supported by hybrid intelligent
decision support systems providing a logical support process for agent group composition and
negotiation, assessment of strategic architecture configuration factors, and performance
measurement.
The generic design advocated in this research to support the development of support systems,
which is based on the use of the generic data support illustrated in Figure 3.4 and represented
by the WSN database in Figure 4.2, increases the reusability of the software agents implemented
to meet the requirements and fit within the physical system design. It does so by creating a wide
range of agent behaviour patterns and identifying those required for the control system, aimed
at enhancing both strategic and operational real-time decision-making effectiveness and
efficiency. The generic terms used to represent the diagnostic knowledge in a
domain are smart devices and agents composing hierarchies, goals, rules of association,
behaviours, generic process tasks and subtasks, and performance.
Figure 4.3: Hybrid intelligent web-based service decision support system architecture
The major challenge in the definition of requirements for this change adaptation process is the
integration, in a hybrid decision-making model, of the user context conditions, i.e. regardless of
which applications created and updated the data, what platform they are running
on, or where the data is accessible from, i.e. stored in databases, stored locally in the network
in smart devices or smart sensor devices, or held in backend systems.
The above system architecture, shown in Figure 4.3, integrates the following functional
components:
1. Web data services, defined as business processes describing the different context
situations (processes and events) of the application domain, result in support data
requiring the specification of the data sources identified and wirelessly connected to
an ad hoc network that integrates both RFID and WSN, and both homogeneous and
heterogeneous devices.
2. Real-time data capture occurs following the configuration and deployment of the ad
hoc network, and results in networking processed data and/or real time data stored in
the server, outside the network.
4. Aggregation data mining provides the required support for data fusion which consists
of discovering patterns in large data sets and extracting information presented in a
readable or understandable structure that enables its association to other information.
5. Analytics are necessary support system elements dealing with data analysis, concurrently
using descriptive, predictive and prescriptive modes to provide a holistic view
of the context situation, covering what has happened, what could have
happened and what needs to be done. This is supported by existing models and
knowledge rules, and can result in the elaboration of new models and knowledge
rules.
The last three components support information and knowledge fusion functions.
process [194]. These DSS, embodying the panoply of decision model structures illustrated in
Figure 4.4, are of the third system generation, characterised by process intelligence.
Of great significance in the design of hybrid intelligent agent-based systems for specific
domains in heterogeneous environments is the data lifetime, which poses the need for storage
and accessibility mechanisms intelligent enough to lower CPU and storage
demands, in both mainframe and distributed environments, and to ensure the data replication
and availability supporting processing continuity in the case of partial or total network
failures. The real-time replication of data and its changes poses the problem of robust and
scalable performance with acceptable transactional integrity and consistency, and full
recoverability. This problem is even more challenging when replicating large data volumes
requiring comprehensive monitoring and diagnostic capabilities to effectively support the
data acquisition, aggregation, consolidation and migration, and ensure its full visibility [195].
Existing decision support systems are lacking capabilities in several areas such as:
The inability to support the various individual and group roles during the
decision-making interaction,
The difficulty of collecting in real time the required big data from heterogeneous
sources,
The processing model limitations and assumptions imposed by context aware
uncontrolled factors and environments,
The difficulty of defining and quantifying decision variables, and discovering and
extracting knowledge, and
The handling of technological knowledge resulting from the use of existing and new
technologies involved in their design.
The novelty in this research is the generic recursive design framework and the hybrid system
architecture built around:
The use of hybrid integrated elements in the same recursive design framework for the
development of a large enterprise-scale data warehouse with various distributed
intelligence applications that covers the different domain contexts of a knowledge
domain,
The real-time processing of most of the big data at the source, using in-network processing as
part of the data fusion, and the communication of the remaining and aggregated data outside
the network to support information and knowledge fusion focused on knowledge
discovery and extraction, and
The decoupling of data fusion from information and knowledge fusion functions.
The essential aspect of this novelty is the focus on the real-time processing of big real-time data
from remote heterogeneous sources for real-time predictions and instantaneous decision
making. This has required the system architecture to incorporate hybrid elements, and
the contribution of this research lies in the extension of the hybridity concept, which is
threefold: conceptual, hardware and software integration.
a) The conceptual integration deals with the creation of recursive generic design
frameworks based on the concept of distributed multi-agent systems and hybrid self-
tuned intelligent decision systems to support:
b) The hardware dimension is related to the possibility of using:
Use of human and software agents in the same cooperative decision making process,
Integration of different types of decision support systems, as listed in Section 2.8, and
Although domain experts are required at the launch of the hybrid intelligent decision support
system, they will be replaced by intelligent software agents for real-time knowledge
discovery and extraction.
Data mining and knowledge discovery support big data processing and management,
and the provision of a link between transactional and analytical data by identifying
and analysing patterns that enable information to be structured as the basis for
knowledge discovery.
Knowledge, which includes all that agents perceive, discover and learn, provides the
understanding of the knowledge domain contexts and supports the elaboration of the hybrid
decision models that provide the decisions.
Decisions, which are influenced by the knowledge discovery and extraction process
that provides their explanations, support the meeting of agents' goals and
confer on the agents their existence, roles and goals, so as to support the dynamic changes occurring
within the multi-agent system. Decision outcomes, which result from the decision
making, contribute to the provision of decision understanding and plausible
explanations, taking into account the impact of latency and/or the absence of causality.
Explanations are based on the decision outcomes, which take into account the decision
context and conditions.
The decision making process is based on three iterative stages: intelligence aimed at
understanding the problem, design concerned with elaborating the different decision
alternatives and their plausible outcomes, and choice concerned with ranking the decision
alternatives based on elicited preferences.
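As a minimal illustration of the choice stage, the Python sketch below ranks hypothetical decision alternatives with a simple weighted sum of elicited preference weights; the criteria, weights and scores are illustrative assumptions only.

# Choice stage sketch: rank designed alternatives by elicited preference weights.
alternatives = {
    "evacuate-wing-A":    {"safety": 0.9, "cost": 0.3, "delay": 0.2},
    "ventilate-only":     {"safety": 0.5, "cost": 0.8, "delay": 0.9},
    "partial-evacuation": {"safety": 0.7, "cost": 0.6, "delay": 0.6},
}
weights = {"safety": 0.6, "cost": 0.1, "delay": 0.3}  # elicited preferences

def score(criteria):
    """Weighted-sum value of one alternative."""
    return sum(weights[c] * v for c, v in criteria.items())

ranking = sorted(alternatives, key=lambda a: score(alternatives[a]), reverse=True)
print(ranking)  # alternatives ordered from most to least preferred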
Agents can be smart and intelligent devices, and software components, aimed at performing,
individually or in a group, one or several reasoning tasks which differ in their nature. Smart
and intelligent devices reside in the sensor nodes, which are wirelessly connected to form the
ad hoc network, whereas software agents are generated and implemented in the
hybrid intelligent decision support system to compose web-based services that support the
knowledge domain processes.
In the context of multi-agent systems, these tasks include goal reasoning, plan coordination,
failure recovery, goal-plan conflict and resolution, and other specific tasks like computational
logic and linguistics, agent collaboration, cooperation and arbitration, simulation, agent
dialogue and communication, agent replication or destruction, automatic code generation,
agent learning, etc. They are performed under the control of agents structured in a hierarchy
that confers the different roles for filtering information, identifying individual or group tasks
with similar interests, and automating repetitive tasks.
Conflict resolution is based on using three different strategies: negotiation, arbitration and
mediation, which are supported by a diversity of intelligent techniques for resolving conflicts
of knowledge. Among the main techniques, Bayesian networks, case-based reasoning,
expert systems, fuzzy systems, genetic algorithms, and ontological and search-based
techniques can be used in combination to reduce knowledge conflicts between intelligent
agents.
Of great importance in the web-based data support aimed at knowledge discovery is the
availability and accessibility of log-based, up-to-date critical information through a comprehensive
multi-media support set, to efficiently and effectively perform remote monitoring
using web-enabled service platforms. Web-enabled intelligent data support services aim at
achieving near-zero downtime performance and can effectively minimize the massive
information bottleneck that exists between sensor nodes and backend network support
systems.
The demand to connect with backend decision systems supporting distributed ad hoc network
configurations is increasing. This connection aims at synchronizing remote monitoring
systems and distributed applications services, and supporting horizontal integration to
provide information integration between a lower network level and an upper decision making
level.
The trend of the future e-network for the management of data support is to bridge the gap
between front-end and backend data support processes. There are various issues related to
design and deployment of a web-enabled distributed control application platform for
automated activity surveillance where the built-in web composite services enable the
execution of distributed multi-agent control applications through a web browser [198].
Figure 4.5: System generic design framework
Service description,
Service help,
Service support and
Service delivery.
Service cost and pricing is an additional step that is carried out to show how the service can
be supported and delivered.
constraints in the models, and examining the dominance structure of all alternatives to apply
the weak dominance technique for alternative selection at the choice step of the decision
making process.
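A minimal Python sketch of one common formulation of weak-dominance screening is shown below; the alternatives and criterion scores are hypothetical, and higher scores are assumed to be better.

# Weak-dominance screening at the choice step: an alternative is removed when another
# alternative is at least as good on every criterion (and not identical to it).
alternatives = {
    "A": (0.9, 0.4, 0.7),
    "B": (0.9, 0.5, 0.7),   # weakly dominates A
    "C": (0.6, 0.9, 0.5),
}

def weakly_dominates(x, y):
    """x weakly dominates y if x is at least as good as y on all criteria."""
    return all(a >= b for a, b in zip(x, y)) and x != y

non_dominated = {
    name: scores for name, scores in alternatives.items()
    if not any(weakly_dominates(other, scores)
               for o, other in alternatives.items() if o != name)
}
print(non_dominated)  # {'B': ..., 'C': ...}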
Core services layer, containing high level features and generic core services frameworks
which are necessary to enhance the service framework effectiveness and efficiency. High
level features include:
Peer-to-Peer Services to support multi-peer connectivity to initiate communications
sessions with nearby devices.
o propagating the data changes and ensuring the data integrity and the consistency
of the relationships between objects, and
re-use. This analysis principle is advocated for use in the proposed system generic design
framework to identify and model the knowledge functions in software agents, eliciting the
support system requirements and system architecture. The proposed research framework aims
at supporting a wide range of goals, domains, and involved processes in information scanning
and emerging issue tracking and monitoring. A combination of domain analysis techniques
is proposed for use, depending on the particularities and characteristics of the
knowledge domains of interest. The representation of the knowledge domain models is based
on the use of object-oriented models or data models, depending on the analysis approach
considered (model or data driven).
A domain model is a goal oriented functional representation of the knowledge and activities
related to a particular domain application. It aims at simplifying the design process by
identifying recurring behavioural patterns in the application domain, and promoting positive
interaction between the system agents. It enables the development and standardisation of
software agents.
The system assessment focuses on identifying the different types of agents' behaviour changes
that help determine the key parameters or influencing factors affecting their individual behaviour
and the global result of their interaction.
In the knowledge domain analysis method, the focus is on understanding and describing the
environment where the knowledge functions or modules are to be implemented and
performed to meet the knowledge requirements. This description will result in separating the
system requirements into two classes: functional and non-functional requirements.
Functional requirements are the description formalisation of the activities performed by the
system when fully implemented and operated. These activities contain both:
- Specific business tasks dealing with the context applications and putting the emphasis on all that might change within the system environment, anticipating the modelling and the integration of these changes without affecting the system integrity, and
- Technical tasks needed to support the technical requirements of the system, covering everything from the deployment of the network to the control of the network activity.
Non-functional requirements are of important consideration in the system life time, as they
concern the key success factors related to the system security, efficiency, effectiveness,
operability, reliability and performance.
4.4.2. Syntactic Conflict Resolution
The rule-based if-then approach is selected to support the system generic design for generating
syntactic conflict resolution strategies when eliciting knowledge requirements using software
agents, mainly when knowledge is viewed at the appropriate level of service composition.
The consistent use of meta-rules enables the production of a highly relevant body of
knowledge without any need for conflict resolution. These meta-rules are basic agent
decision rules for making service composition rules, supporting a pro-active adaptation
of rules to a changing, context-aware data environment rather than a re-active adaptation
through evolution, and incorporating consideration of how to adapt the
service composition policy over time. This process is supported by a continuous analysis
of cumulative changes in the agents' interaction and behaviour for adaptive control, using a
set of techniques for the automatic adjustment of the controllers in real time.
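The Python sketch below gives a minimal, illustrative reading of this approach: a hypothetical meta-rule selects which of the applicable if-then rules is allowed to fire, so that no conflicting actions are produced; the rule contents and the selection criterion are assumptions made for clarity.

# If-then rules plus a meta-rule that resolves which applicable rule may fire.
facts = {"smoke_detected", "zone_occupied"}

rules = [
    {"name": "r1", "if": {"smoke_detected"},                  "then": "raise_alarm"},
    {"name": "r2", "if": {"smoke_detected", "zone_occupied"}, "then": "start_evacuation"},
]

def meta_rule(candidates):
    """Meta-rule: prefer the most specific applicable rule, discarding the others."""
    return max(candidates, key=lambda r: len(r["if"])) if candidates else None

applicable = [r for r in rules if r["if"] <= facts]  # antecedent satisfied by the facts
selected = meta_rule(applicable)
print(selected["then"] if selected else "no action")  # start_evacuation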
The adaptive control aims at achieving or maintaining a desired level of performance of the
control system in terms of real-time alert monitoring, mainly when integrating the parameters
of the dynamic model supporting unknown parameters, events, and/or device deployment and
agent’s behaviour change in time.
Conceptually, the problem domain is composed of declarative knowledge, which provides the
knowledge base, and the problem-solving method, which constitutes the inference engine. This is
done by using the notion of reasoning by mental modelling, based on internal individual and
group models used to validate generalisations.
relations and causal structures linking the events and entities of the domain components.
They are based on merging different internal models of the situations, events and processes
representing the domain expertise.
they contain. The resulting embodied structured knowledge is organised in building blocks to
form the elementary generic knowledge tasks, observing a hierarchical classification and
concluding matching hypothesis testing to establish context situations knowledge-directed
information. This organisation is based on a top-down approach that iteratively decomposes the
domain into components associated with entities, until non-decomposable components are obtained.
Figure 4.6: Generic knowledge task specification
The generic design of core processes describing the domain knowledge is a classificatory
problem-solving process based on a matching process, aimed at structuring the nature and
organization of knowledge and supporting the control processes required for hierarchical
classification.
Complex generic knowledge tasks characterise complex knowledge domains with complex
data and/or a large number of objects with high interaction level, covering different real
world knowledge situations. They require further decomposition into components that are
more elementary in terms of knowledge structure homogeneity and control regime for
behaviour changes, effectively establishing paths and conditions for information exchange
between the agents associated with the generic tasks. These are described in action and control
tasks as illustrated in Figure 4.7, showing the example of a complex generic knowledge task
aimed at interpreting a sensor data reading.
The generic knowledge tasks decomposition process is an iterative process which requires the
use of different tools that include procedural reasoning, logical network-based inference
mechanisms supported by description and predicate logic, and conflict resolution strategies.
This decomposition process results in more elementary problem-solving tasks for greater
clarity. These tasks generate a body of knowledge that is used to reason and draw
conclusions, to determine the truth or falsity of rules.
The ability to decompose complex generic knowledge tasks is based on using an incremental
flexible knowledge architecture of generic tasks containing a data abstraction aimed at fully
supporting the functionality of the knowledge-directed data inference component explained in
the following section.
definite, very likely and definitely not. The definite state covers a task knowledge association
based on knowledge-directed information passing, elaborating generic knowledge association
rules and establishing default values for attributes that can be used to build up inferring
strategies. This type of reasoning supports the various rules about how the datum might be
obtained, including setting the instances' default values. Knowledge-directed value passing is
supported in its reasoning by a model that is used to execute the task by considering the
attributes of some knowledge entities, initialised without detailed knowledge of the
environment, in order to determine the attributes of other data by inference from the available data.
The knowledge generic tasks elaborated from the domain analysis and structured as building
blocks are used for other types of problem solvers in the same or other knowledge domains,
using the principle of routine design to build the object structure. The object knowledge is
organized as a hierarchy of generic tasks, using the entity-component hierarchy of the object
structure. Each task is associated to a form of knowledge, an organization, and a control
regime which define the domain objects specifications.
The design of the object specifications is associated with the definition of the functionality of the
object's components and how they relate, individually and globally, to the functionality of the system,
in terms of how the associated knowledge affects component functionality, establishing the
state changes which illustrate the agents' behavioural changes.
Building blocks are used to structure and represent the various knowledge-based systems
involved in the generic design framework proposed in this research. The advocated hybrid
intelligent decision support system required for the implementation of this framework,
supports the language needed to represent domain knowledge, the knowledge generic tasks
description, the knowledge object description and manipulation.
Capabilities for:
o manipulation of knowledge objects and software agents,
o inter-process communications,
These different knowledge generic tasks elaborate the various knowledge procedures aimed
at making inferences to produce solutions to problems. They are integrated using a
knowledge task description language, enabling the communication between the different
building blocks, and organising the system agents’ interaction supporting the domain
knowledge processes implemented as workflows composed of individual tasks and execution
conditions.
The proposed hybrid intelligent decision support system is a distributed multi-agent system
supporting different concurrent activities involving different classes of agents that must
achieve high level goals while remaining reactive to contingencies and new opportunities.
These activities concern principally the environment monitoring and the reaction to dynamic
behavioural changes and exceptions.
The knowledge generic task description language, like any task language description (TLD) used for
robot control, provides syntactic support for task decomposition, synchronisation,
execution monitoring, and exception handling. This language supports the elaboration of
design plans for the management of the domain knowledge resources. Their real-time control
is supported by the system agents' interaction when scheduling and synchronising concurrent
activities, recording their behavioural changes while controlling actuators and collecting
sensor data, and reacting to exceptions.
In complex knowledge domains, procedural reasoning (PR) can be used to facilitate the
identification of tasks, and of the time and order in which they are to be allocated to agents for
execution, resulting in the structuring of the agent task tree or of the execution flowchart containing
the synchronisation constructs. The agent task tree has leaves, which are the task execution
commands executed by the task control management process.
Of great importance in systems supporting the allocation of tasks to agents for execution is the
use of a task-role-based access control model enabling the control of users' access, so that
only authorised agents can access information objects [200]. The agent roles represent
different specific task competencies. The authorisation process is based on grant permissions,
also called access authorisations, which depend on the agents' status, conferring on them a
different role at each interaction level, and equally on the importance of the resources requiring
access privileges. This process takes into account the discretionary and mandatory
task access control essential for system security and for the support of multiple applications
serving multiple users.
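A minimal Python sketch of such a task-role-based access check is given below; the roles, tasks and information objects are hypothetical and stand only for the grant-permission idea described above.

# Task-role-based access check: grants are held per (role, task) pair, and an agent's
# current role decides whether access to an information object is authorised.
GRANTS = {
    ("coordinator", "plan_evacuation"): {"floor_plan", "wsn_database"},
    ("sensor_agent", "report_reading"): {"wsn_database"},
}

def authorised(agent_role, task, information_object):
    """Only agents whose current role holds a grant for the task may access the object."""
    return information_object in GRANTS.get((agent_role, task), set())

print(authorised("sensor_agent", "report_reading", "wsn_database"))  # True
print(authorised("sensor_agent", "report_reading", "floor_plan"))    # False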
The knowledge domain mapping process results in three models which are essential in
implementing various applications in a domain. These models, which are derived from the
Feature Oriented Domain Analysis (FODA) method [201] are:
The information model analyses and structures the domain knowledge, and elicits the data
requirements.
The specification model details both the general and specific capabilities of applications
in a domain presented in the form of web-based services.
The functional model shows the functionality and behaviour of the agents invoked by
different applications in a domain, explicating how to make use of the data entities.
The FODA method consists of an iterative process of the following steps, a sketch of the
resulting specifications being given after the list:
Select features from the domain model.
Create object specifications, by:
o identifying the domain objects, including their name and description, and
o deriving the objects' operations, detailing input data and output information.
Create agent specifications by identifying the domain objects invoked in the agent
interaction, including the required inputs and outputs from the object specification.
Create device specifications by selecting the appropriate control characteristics, including
the software and hardware elements.
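The Python data structures below illustrate what these specifications might look like; the field names are assumptions kept close to the wording of the steps above, not a definitive FODA artefact.

# Illustrative specification records produced by the FODA steps (assumed field names).
from dataclasses import dataclass, field

@dataclass
class ObjectSpecification:
    name: str
    description: str
    operations: dict = field(default_factory=dict)  # operation -> (inputs, outputs)

@dataclass
class AgentSpecification:
    name: str
    invoked_objects: list
    inputs: list
    outputs: list

@dataclass
class DeviceSpecification:
    name: str
    control_characteristics: dict  # selected software and hardware elements

smoke_obj = ObjectSpecification(
    "SmokeDetector", "Generic smart fire detection device",
    {"read": (["node_id"], ["smoke_level"])})
monitor_agent = AgentSpecification(
    "MonitoringAgent", [smoke_obj.name], ["smoke_level"], ["alarm_event"])
node_device = DeviceSpecification(
    "SensorNode", {"micro_controller": "low-power", "radio": "IEEE 802.15.4"})
print(monitor_agent)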
The case study developed to support the research study uses a symbolic location related to the
virtual layout chosen to represent the indoor sensing premises. The symbolic location is based
on absolute location systems which use a coordinate system for locating, meeting the precision and accuracy requirements of localisation.
The indoor detection activities examined in this research are based on matching sensors using a knowledge-based system to identify the most appropriate sensors for the different missions covering this domain. Existing sensor data, elaborated using a Sensor Semantic Network Ontology, is available in a database, and the selected sensors are integrated in a sensor node platform wirelessly connected to form a WSN supported by a server storing the WSN database. This integration is supported by an intelligent support system that determines the sensor platform specifications using the individual requirements of the sensors to be grouped in a sensor node. This sensor platform is designed to be a homogeneous device
representing, for example, a smart fire detector.
In this framework, the proposed ad hoc WSN also includes heterogeneous devices connected wirelessly. This heterogeneity results from the combined use, in the integrated WSN-RFID, of a variety of devices and RFID tags, including fire detectors, cameras, attendance and motion sensors and microphones, to enhance the role and use of the homogeneous devices. This enhancement is supported by the integration of different device functions modelled in the intelligent agents composing the different services. Due to the differences between devices, their integration requires a high level of data modularity and adaptability in the distributed multi-agent system architecture.
4.6.2.1. Sensors
Sensors are defined as devices that measure physical quantities (e.g. temperature, force) or abstract quantities (e.g. the number of people in a room). They measure simple physical properties of the environment, and can be used for sensing and monitoring various activities, from simple phenomena to complex events and situations. They are integrated and/or connected with a multi-purpose device, depending on their use. When incorporated in smart homogeneous and heterogeneous devices connected to networks, the device is considered a sensor node using a micro-controller that requires CPU, memory and power.
The modelling and publication of sensor data and their contexts of use consists of using a data representation in which the sensor data is annotated with semantic metadata, with the aim of increasing interoperability between sensors and sensing systems, and providing contextual information essential for situational knowledge [203]. This representation is supported by the sensor semantic network ontology (SSN), defining data encodings and web services to store and access sensor-related data [204].
4.6.2.2. Sensor Semantic Network Ontology
The SSN ontology is a solution elaborated to describe the WSN sensors, their data and their
contexts of use. This description allows autonomous or semi-autonomous knowledge agents
associated to the deployment of these sensors to assist in deploying, configuring, collecting,
processing, reasoning about, and acting on sensors and their observations. It describes sensors
in their main characteristics, which include their capabilities, measurement processes,
observations and deployments. It provides sensors metadata vocabularies supporting the
sensors semantics and the main WSN issues, including the sensor nodes interoperability, data
communication policy in terms of sensor nodes parameterisation, use of sensors for their
enhanced context adaptation, and data communication limitation for less power consumption
[205].
The SSN ontology is of great importance given the wide variety of sensors and sensor nodes, and the huge volume of real-time sensor data, whose full integration requires associating with the data a description of its acquisition and communication policy. It is a support tool for the development of intelligent agents able to modify the sensor nodes' behaviour to enhance their configuration and optimise their performance in terms of lifetime.
The monitoring planning process aims at assigning monitoring missions, which requires sensor-mission matching: first identifying the potential hazards that might occur in a defined space, and then examining the available capabilities that will eliminate these hazards. This can be extended later to include continuous sensing, event detection, pattern identification, location sensing, access control and local control of actuators.
The sensing mission matching is a sensor domain feature which consists of breaking a
monitoring mission down into a collection of sensing and monitoring operations, each of
which is broken down further into a collection of distinct elementary measurement, control
and coordination tasks. Each task has specific capability requirements that enable the
accurate measurement of the feature of interest associated to the task.
On the other hand, the sensor-mission matching is a requirements engineering process that associates measurement requirements with capabilities, and in a second step validates the selection of existing sensor nodes during the composition of the individual tasks, or suggests the design of specific sensor nodes. This process is illustrated in Figure 4.8 shown below. The composition of the individual tasks results in the identification of a group of capabilities already present in existing sensor nodes, or these grouped capabilities become a candidate for the design of new sensor nodes.
Available sensor data stored in the sensor database [205] can be used to make decisions about which sensors are more or less appropriate for a specific mission requiring the performance of intelligence, surveillance and reconnaissance tasks in a monitoring activity.
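A minimal Python sketch of this sensor-mission matching step is given below. It assumes that each mission is decomposed into tasks with required capability sets and that the sensor database exposes the capability set of each existing sensor node; all node names and capability labels are illustrative.

# Match mission task requirements against existing sensor node capabilities.
from typing import Dict, Set

SENSOR_NODE_CAPABILITIES: Dict[str, Set[str]] = {
    "smart_fire_detector": {"temperature", "smoke", "co2"},
    "attendance_node": {"motion", "rfid_read"},
}

def match_mission(task_requirements: Dict[str, Set[str]]):
    """For each task, return the existing nodes covering its capability
    requirements, or mark the requirement set as a new-node design candidate."""
    matches, design_candidates = {}, []
    for task, required in task_requirements.items():
        candidates = [node for node, caps in SENSOR_NODE_CAPABILITIES.items()
                      if required <= caps]
        if candidates:
            matches[task] = candidates
        else:
            design_candidates.append((task, required))
    return matches, design_candidates

matches, to_design = match_mission({
    "fire_detection": {"temperature", "smoke"},
    "room_tracking": {"motion", "rfid_read", "camera"},
})
# 'fire_detection' matches the smart fire detector; 'room_tracking'
# becomes a candidate for the design of a new sensor node.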
Figure 4.9: Sensor domain model
The selection of the support for the grouping of sensor capabilities depends on the mobility of the domain entities to be sensed and/or monitored. Fixed entities require the use of smart detectors and/or monitors, whereas mobile entities can be connected to smart active tags attached to objects or people, using the RFID technology. These tags are sensing and radio-equipped devices, and the generic sensor node resulting from the integration of RFID and WSN is a hybrid embedded system wirelessly connected to the WSN.
Heterogeneous nodes are considered in this work as additional nodes to the existing nodes of the ad hoc WSN. They differ in the sense that they have a different software architecture based on the integration of high-speed microprocessors and high-bandwidth, long-distance network transceivers. They enable the ad hoc WSN to acquire in-network functional and data capabilities, to process and store long-term data, and also to exchange data with the main system.
RFID. This data is associated with the location of all mobile devices identified by attached tags.
RSSI is the measurement of the power present in a received radio signal.
The characteristics of the sensor node functional components depend on the specifications
that meet the sensing-monitoring-tracking requirements in terms of power supply, and data
communication, processing and storage, as expressed individually for each sensor and
possibly their attached RFID tags. Sensor nodes are designed as smart devices procured at
low cost and built with optimal capabilities for energy efficiency, wireless communication,
programmability, expansibility, intelligence and size reduction.
There is a clear separation between programming applications aimed at modifying the device
parameters (task deadline, period, and CPU and network bandwidth reservation) and the
sensors application tasks representing the different sensing tasks and the sensor node
configuration. Of great importance for the sensor node performance and lifetime is the incorporation in the node deployment process of two essential tasks: the resources reservation (energy and memory) needed for the support of both programming and sensor task applications, and real-time scheduling for the support of task management, inter-process communication (data exchange between processes), and process synchronisation, as illustrated in Figure 4.10, which also shows a generic active sensor node architecture.
Figure 4.10: Generic active sensor node functional and system architecture
Dynamic power management (DPM) is used at runtime as a major power saving technique based on the variations in workloads, shutting devices down when not needed and waking them when needed. This is based on using, during the WSN configuration, three different device states: active, sleep and idle. More importantly, the energy consumption is monitored with a focus on selecting the processes to run depending on the available power in the device.
The DPM technique requires the use of an appropriate embedded operating system to support the execution of internal commands implemented in the device and of external users' commands. More importantly, the use of active sensor nodes to reduce the quantity of data transmitted across the network, by organising the local processing and storage of data, reduces the energy consumption considerably.
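A minimal sketch of such a DPM policy is given below. The current-draw figures and the 30-second idle threshold are assumptions chosen for illustration only; the rule simply puts an inactive node to sleep, keeps it idle when work is imminent, and runs it only when a task is due.

# A sketch of a workload-driven dynamic power management decision.
STATE_CURRENT_MA = {"active": 20.0, "idle": 2.0, "sleep": 0.02}  # assumed values

def select_state(pending_tasks: int, seconds_to_next_task: float,
                 available_charge_mah: float) -> str:
    """Pick the device state from the workload and the remaining energy."""
    if pending_tasks > 0 and available_charge_mah > 0:
        return "active"
    if seconds_to_next_task < 30:   # a task is imminent: stay idle
        return "idle"
    return "sleep"                  # nothing to do: shut the device down

# Example: a node with no pending task and the next reading in 5 minutes
# is put to sleep, saving roughly 20 mA relative to staying active.
print(select_state(pending_tasks=0, seconds_to_next_task=300,
                   available_charge_mah=1800.0))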
A variety of decision making support tools can be used for calculating energy consumption in
sensor nodes [209]. This evaluation is based on both the characteristics of the sensor nodes
functional components, and on a number of user-defined initialization parameters used in
organising the deployment and configuration of the device. The monitoring of the WSN
power management is an essential component of the HIDSS supporting the proposed research
conceptual framework.
The energy reservation in the sensor node concerns the CPU, the sensors and the network.
4.6.4.3. The Sensing and/or Actuating Unit
In homogeneous sensor nodes, the sensing unit is composed of a collection of sensors which produce an electric signal by sensing the physical environment. The signal is transformed by the Analog to Digital Converter (ADC) integrated in the microcontroller. In these devices, the incorporation of several sensors in the same sensor node poses the problem of their individual sensing limitations in terms of distance, which restricts the device coverage to the minimum of the different sensing distances.
The sensor node design quality can be improved during the WSN deployment and configuration using a knowledge-based approach that enhances the collaboration among sensor nodes within clusters to better share resources, reduce sensor node workloads, and support neighbouring sensor nodes' failures [212]. In this context, the configuration of sensor nodes may be influenced and modified by the inferences and knowledge of neighbouring sensor nodes in order to obtain a more accurate and reliable behaviour. The sensor nodes are supported by a rule-based system that shares variables, data and knowledge between them when collaborating in achieving a global objective by fusing local and remote information.
4.7.1. Domain Context and situations Analysis
The sensor nodes selection is based on the sensing and monitoring specifications of the
domain contexts and situations which are derived from the different requirements linked to
the location and use of sensors in the domain context.
The domain context considered as a case study in this work to support the conceptual
framework implementation is sensing and monitoring in public attended places for detecting
a list of events. These places have a building configuration, composed of rooms and halls, where people gather for different purposes. The sensing and monitoring requirements for these places include:
Full sensing and/or monitoring of these places in terms of devices working or not, people
and object locations, adequate signal coverage of sensed areas, and full time connectivity.
Wall sensing obstruction.
Integrated sensing and/or monitoring.
Event alerting.
Fault finding.
Coordinated effort in emergency response.
to search for the optimisation of the detection coverage using different methods of locating the fire detectors and the required dispensers. The resulting specifications are:
This multifunction sensor is a measurement device for control systems with LED
indications. It is ideal for applications that require ventilation on demand for public
attended places such as Schools, Libraries, Universities, Offices, Sport Halls, Theatre
and Leisure centres.
The sensor should be located:
o Away from any windows, doors or any sources of direct air flow in the measured
space.
o Away from any heat sources.
o Away from any areas with poor air circulation such as dead spots behind doors, for
example.
Sensor Outputs:
o CO2: 0-10 V for 0-2000 ppm
o Temperature: 0-10 V for 0-50° C
o Humidity: 0-10 V for 0-100%
o Sensing range: 0-5 m
Accuracy @ 25° C and 50% RH:
o CO2 +/- 40 ppm +2% of reading
o Temperature (voltage) +/- 0.5° C
o Temperature (resistive) +/- 0.5° C
o Humidity (option) +/- 3% RH
Power Supply: 24 V AC/DC (+/- 15%)
Power Consumption: 100 mA
Operating Conditions: 0-50° C 10 to 80% RH non-condensing
Warm up time: 2 Minutes (operational) 10 Minutes (peak accuracy)
3 x LED indication (optional)
o Green – on below 1000 ppm
o Yellow – on at 1000 to 1500 ppm
o Red – on above 1500 ppm
Terminals: Max cable size 1.0 mm
Dimensions: 80 mm x 80 mm x 29 mm
The selection process can be refined to review the domain contexts specifications when the
search returns a NULL value. This process is supported by a decision making process
involving different techniques needed to compare and rank plausible alternatives returned by
the query.
4.7.3. The WSN Deployment
The WSN deployment consists of connecting and deploying the different sensor nodes identified as best meeting the sensing and monitoring mission requirements within the space configuration considered. This operation is preceded by the device planning, which consists of finding the number of devices needed for the full coverage of the sensing and monitoring area. Location algorithms based on centrality, and rules of thumb, are used to locate the different homogeneous and heterogeneous devices.
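A minimal sketch of the device planning calculation is shown below, under the simplifying assumption of a square distribution pattern in which each device covers a square cell whose side equals the device spacing distance; the room dimensions are illustrative.

# Number of devices needed to tile a rectangular area at a given spacing.
import math

def devices_for_full_coverage(room_width_m: float, room_length_m: float,
                              spacing_distance_m: float) -> int:
    """Count the devices needed to tile the room with the given spacing."""
    cols = math.ceil(room_width_m / spacing_distance_m)
    rows = math.ceil(room_length_m / spacing_distance_m)
    return cols * rows

# Example: a 12 m x 8 m hall covered by detectors spaced 5 m apart
# requires 3 x 2 = 6 devices.
print(devices_for_full_coverage(12.0, 8.0, 5.0))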
In this prior planning, the node location is a unique point calculated with a localized
algorithm based on the use of a specific knowledge procedure depending on the type of
sensor node needed to perform the required task, such as, for example:
Sprinkling.
The exact node location, as accurately determined by the appropriate node localization
algorithm detailed in the following sections, enables the improvement of the data fusion
performance [213]. Data fusion plays a key role when designing an integrated solution based on the use of an ad hoc WSN, due to the multi-sensor data fusion integrating the collection of different types of data from different sources.
a fire detector device is determined by the spacing distance between two nodes distributed
following a geometric pattern that satisfies the following two conditions:
The generic nature of this conceptual framework suggests unifying the concept of sensor node under the appellation of device sensor node, which regroups both homogeneous and heterogeneous devices.
4.7.4. Ad hoc WSN Configuration
The first two categories relate to descriptive data illustrating the sensor semantic network ontology for sensors and sensor nodes, while the third category is composed of the network configuration data and the sensing data. Each category of data is supported by a specific domain data model resulting from the analysis of the sensing knowledge domain.
4.9. Summary
The generic conceptual design framework is a methodological support that guides novice users in sensor and sensor node knowledge discovery by providing them with generic knowledge processes decomposed into elementary knowledge tasks. These tasks provide a flexible composition structure to build re-usable blocks, or agent software, using the task description and manipulation language of the integrated knowledge domain covering the sensing and WSN-RFID domains. The case study used in this research and presented in the research evaluation is an example of complex problem solving involving real-time decisions based on real-time heterogeneous data, with a high level of continuous individual and group interaction of every heterogeneous entity composing the organisation environment and collaborating to preserve the system control.
Chapter 5: System Developments
5.1. Introduction
The definition of system development as presented in this chapter does not refer to the traditional six phases of the system analysis and design process [199]. It corresponds only to its fourth step, "Develop the System", which includes the three following stages: develop, test and evaluate the software. This step is preceded by the requirements analysis and the architecture design, described in the previous chapter. The system is defined in this work as the system prototype.
The Hybrid Intelligent Decision Support System (HIDSS) is the software proposed in this research to support the sensing-monitoring knowledge domain analysis, sensor node design, integrated WSN-RFID configuration and deployment, and context-aware real-time data capture and sensor and WSN data fusion.
An indoor sensing and monitoring case study has been proposed in this research to apply the
generic system conceptual design framework presented in the previous chapter, to support
information scanning and emerging issues for a wide range of applications including
emergency planning for fire detection, control access and other related issues.
concepts, discovering new functional requirements and understanding the impact of non-functional requirements, and testing the different system functions segmented into partitions. This approach makes modifications easier to make as the iterations progress, since the different modules are logically segmented.
The discovery of new functional requirements results from the sensor and WSN data fusion process, which describes the different steps of the integration of sensor and WSN data and knowledge from several sources:
Data input,
Data processing,
Data-level fusion,
Signal classification, and
State classification.
The sensor and WSN data fusion functional diagram is shown below in figure 5.1.
This data fusion process, which is illustrated in the data management architecture shown in Figure 3.4, is part of a context-aware information fusion that includes support capabilities for on-line transactional and analytical processing, based on a panoply of theory, techniques and tools for exploiting the synergy in the information acquired from multiple sources and producing knowledge supported by fusion functions. The integrated WSN-RFID plays a preponderant role in enhancing sensing, monitoring and tracking, observing and analysing the environment to instantly identify situations that require taking immediate actions. This results in updating the knowledge and models, stored in databases, about previous behaviour, and the simulations predicting future behaviour.
The case study presented in this work consists of the design of an integrated WSN-RFID
based on the use of a virtual building layout to support:
Eliminating or reducing a data bottleneck when sending and receiving in the network,
Implementing an effective sensor node reconfiguration policy to reduce the limitation
of communication bandwidth,
Eliminating redundancy among data values from neighbouring sensors, and
Reducing energy consumption by implementing in-networking and reducing the large
amount of data communicated within the network.
The integrated WSN-RFID is composed of the following devices:
Sensors,
RFID Tags,
Homogeneous devices (Sensing, monitoring and tracking devices),
Heterogeneous devices that enhance the deployment of the homogeneous devices, and
Gateways and routers.
Of great importance in the support of the data and information fusion is the device
assessment functionality supported by the data fusion model which includes several fusion
levels, illustrating the sensor network signal processing tasks, and corresponding to the
following functions:
Signal or feature assessment involving data extraction, analysis and event detection,
Entity assessment that includes the parametric and attributive states of devices during
their configuration and deployment,
Situation assessment regarding the nature of the influence between the different devices,
Impact assessment concerning predicted impacts on other devices, and
Performance assessment in terms of measures of effectiveness.
This device support, which is organised in a distributive architecture system, integrates data fusion and mining to produce optimal decision fusion by considering a finite set of decision alternatives at each data fusion level. This process integrates low-level decisions, and can be decomposed into two functional components:
Real time detection of known or expected patterns as the heart of the information
fusion process to filter known patterns, and
Off-line discovery of new patterns supported by data analytics and aggregation as the
data mining process.
5.2.6. Link between the HIDSS architecture and the case study
The framework proposed in this research illustrated in the HIDSS system architecture shown
in Section 4.3.2.1, establishes the conceptual link between:
a) Data, Information and knowledge using the Data Mining, and the Knowledge Discovery
and Extraction, and
These two conceptual links are reflected in the case study proposed in this research, at the
following functional levels:
Allocate heterogeneous devices to enhance the detection devices' functionality (a),
describing the data interaction between the different entities shown in Figure 5.2. This task includes the preparation of the data support needed for the real-time data input for the sensor and WSN data fusion, as the first phase of the sensor and WSN information fusion process; the second phase being the information discovery supported by the analytics of data mining.
Figure 5.2: Functional processes based on the entity model
The structuring of the functional units is based on the analysis of the data and knowledge structures, and of the agents that will be used to organise the next steps of the sensor and WSN data fusion process, especially when supporting the sensor network signal processing tasks as a part of the device support.
The implementation of the functional units listed above, is preceded by the design of the
WSN and its deployment using the functional data model showing the different data entities.
This model is essential to support the sensor and WSN data input, a necessary step performed
prior to the data fusion process.
(*) This module, corresponding to the WSN energy plan management, is a planning process aimed at maximising the WSN lifetime and ensuring its continuous operation; it has not been implemented.
The sensors and RFID tags databases provide the general characteristics description and
functional specifications of every sensor or RFID tag which might be inserted/incorporated in
device sensor nodes. These form the homogeneous devices of the ad hoc WSN.
Heterogeneous devices are described by a data collection listing the devices used to enhance the deployment and actions of the homogeneous devices, and the sensing, monitoring and tracking inside and outside a building. These devices include:
People counters,
IP cameras,
Sign displayers,
Window and door opening-closing control systems,
RFID readers, sprinklers and
Warning signs.
The gateway-router database provides the general characteristics and functional
specifications of both gateways and routers used to wirelessly connect homogeneous and
heterogeneous devices to WSNs. This database may include a table of the available routes
and their conditions, updated from the network latest state information available. This
information, used along with distance, signal strength and prevailing traffic, is essential to
determine the fastest route for data communication.
The support databases are centralised and located on a unique front-end server. They are updated when incorporating new technical information resulting from device development improvements, performance assessments or newly elaborated devices. A database update
component is provided in the developed system to support these operations, enabling the
basic manipulations of a database management system. The database data models and
interface are shown in Figure 5.3.
Figure 5.3: Database data models and interfaces
The sensor node design is a key component of the generic design conceptual framework. It
aims at ensuring that the sensing, monitoring and tracking requirements are thoroughly
examined and the specifications of a device sensor node that fully meet these requirements
are dynamically elaborated, performing an automatic update when these requirements have
changed.
In this work, the sensor node design consists of grouping a set of sensors, the number being limited in this case study to six sensors per sensor node, with the option of incorporating an RFID tag or reader, depending on the sensing, monitoring and tracking requirements.
Of great importance in the sensor node design is the calculation of the device specifications (power consumption, storage and processing) elaborated should this device become fully active to perform the local operations required for in-network processing and data storage. This calculation can be performed with different scenarios of sensor and sensor node configuration, depending on the setting of each sensor and device sensor node in the ad hoc WSN.
There is a wide variety of calculation models for evaluating the power consumption of wireless sensor network applications [214]. These models are based on different strategies to help predict the WSN lifetime depending on the application specifications for the use of the WSN, and to optimise the energy consumed using an effective energy plan.
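A minimal node-lifetime estimate of the kind such calculation models provide is sketched below. It assumes a duty-cycled node whose average current is the time-weighted mean of its active and sleep currents; all the numeric figures are assumed for illustration.

# A simple duty-cycle based lifetime estimate for a sensor node.
def average_current_ma(duty_cycle: float, active_ma: float, sleep_ma: float) -> float:
    """Time-weighted average current for a duty-cycled sensor node."""
    return duty_cycle * active_ma + (1.0 - duty_cycle) * sleep_ma

def lifetime_days(battery_capacity_mah: float, avg_current_ma: float) -> float:
    """Expected lifetime in days for a given battery capacity."""
    return battery_capacity_mah / avg_current_ma / 24.0

# Example: a node active 2% of the time at 20 mA, sleeping at 0.02 mA,
# powered by a 2400 mAh battery, lasts roughly 238 days.
avg = average_current_ma(0.02, 20.0, 0.02)
print(round(lifetime_days(2400.0, avg)))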
The integrated functional unit aimed at the design of the generic sensor node is illustrated by
the interface screenshot shown in Figure 5.4.
It is essential that the designed sensor node device meets the following quality requirements:
5.3.2.3. The Sensor Node Design Implementation
The generic sensor node design process is based on the use of data described in the device
sensor design data model illustrated in Figure 5.4.
The content of this model can be context application specific in the sense that it can be
updated depending on the general characteristics description and functional specifications of
the needed sensors or RFID tags composing the desired device sensor node. This updating is
supported by the system database management component which provides the flexibility to
support both the definition and manipulation of data operations. Additional data functions
need to be developed to support the data warehousing. An exhaustive list of sensor properties
is published in technical documents.
devices. This data is structured using a task description language for the activity involving the corresponding tags. This language is required to implement the sensor node architecture: composition plans are converted into assembly commands which, when performed, result in perceptual data reflecting the status of each component of the node.
The composed devices are wirelessly connected to the WSN, interacting with the RFID
readers installed in each room of the building, as explained later, in the section about the
heterogeneous devices allocation.
Mobile sensor node devices will be linked to a table of people or goods to highlight the
concept of device attachment to these two entities which can be grouped in the same database
using an entity code to dissociate the two different types.
The virtual building layout is an essential requirement for the support of:
the localisation of the device sensor nodes and their spatial coverage,
the localisation and tracking of people,
the spatial measurements required by the different models used by the WSN
applications, and
the representation of the building features such as doors and windows, and energy
supply points for the electric sockets and gas supply, needed for the determination of
the sensing and control requirements.
The elaboration of the virtual building layout is based on the use of an encoding system to draw a 2D representation of the building walls in the form of lines represented in the data model shown in Figure 5.6. The layout is generated from the building wall segments using the 8-direction encoding system. An example of a virtual drawing layout is shown in Figure 5.7. The encoding system used to draw this virtual layout is an 8- or 16-directional coding procedure, depending on the layout shape accuracy required. Both systems use the same principle, illustrated in Figure 5.7. This principle is based on the use of a layout starting point (the main external door, for example) and uses a set of wall segments to compose the external and internal walls of the building, operating room by room. Each wall segment is composed of:
a directional code, called "Direction" in the data model, represented by a decimal character (0-7) for the 8-directional coding or a hexadecimal character (0-9, A-F) for the 16-directional coding [155], indicating the drawing direction:
o 0 : North-West,
o 1 : North,
o 2 : North-East,
o 3 : East,
o 4 : South-East,
o 5 : South,
o 6 : South-West, and
o 7 : West,
The length of the wall segment measured in metres, and
Optionally:
o An energy supply point, and
o An adjacent room.
The building layout configuration consists of encoding the wall portions and openings, starting from the main door entrance, illustrated by a dot and the sign <. The encoding principle is based on a starting point SP(xsp, ysp) corresponding to one extremity of the wall portion listed above. The coordinates of the wall segment ending point EP(xep, yep) are calculated using the segment length Sl and the direction code, which indicates the angle value α to use in equations 5.2.
Energy supply use points are indicated by G (Gas) and E (Electricity) in the "ESUP" code of the corresponding segment. Room adjacency is indicated in the "Next room" code of the corresponding starting segment of the adjacent room.
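A minimal sketch of the endpoint calculation for one wall segment is given below. Since equations [5.2] are not reproduced in the text, the angles used are the natural reading of the direction table above (north mapped to +y, east to +x); they should be treated as assumptions.

# Compute EP(xep, yep) from the starting point, direction code and length Sl.
import math

DIRECTION_ANGLE_DEG = {   # direction code -> angle on the x/y axes (assumed)
    0: 135,   # North-West
    1: 90,    # North
    2: 45,    # North-East
    3: 0,     # East
    4: -45,   # South-East
    5: -90,   # South
    6: -135,  # South-West
    7: 180,   # West
}

def segment_end_point(xsp: float, ysp: float, direction_code: int,
                      segment_length_m: float):
    """Ending point of a wall segment drawn from SP along the coded direction."""
    alpha = math.radians(DIRECTION_ANGLE_DEG[direction_code])
    xep = xsp + segment_length_m * math.cos(alpha)
    yep = ysp + segment_length_m * math.sin(alpha)
    return xep, yep

# Example: a 4 m wall drawn eastwards (code 3) from the main door at (0, 0).
print(segment_end_point(0.0, 0.0, 3, 4.0))   # -> (4.0, 0.0)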
Figure 5.7: Example of virtual building layout, encoding systems and data
Of great importance in the virtual building layout data is the identification and localisation of building patterns, which are required to help decide the allocation of the heterogeneous devices needed to enhance the role of the homogeneous devices in real-time alerting and prompt emergency response. These patterns are essential for the configuration of both homogeneous and heterogeneous devices, and their effective wireless connection to the WSN.
Of similar importance is the signal coverage of routers and gateways wirelessly connecting
these homogeneous and heterogeneous devices to communicate their data to the server and
front-end system. These two major requirements are examined in the next two sections:
homogeneous and heterogeneous devices allocation, and WSN configuration and generation.
5.3.4. Devices Allocation
The full optimal coverage is a prime requirement to ensure that working devices, properly
monitored and tested, fully support sensing and monitoring effectively everywhere and at all
times within the premises.
The device sensor node allocation and configuration is an incremental, iterative requirements engineering process composed of the following steps, the first two of which are performed independently:
Figure 5.8: Heterogeneous devices allocation plan
During the allocation process, the first type of each device is arbitrarily selected to generate the required heterogeneous devices. The device type selection will be refined taking into account the specific requirements of the different rooms and the homogeneous devices configuration.
Refinement of the selection process can be performed after the identification of the required heterogeneous devices, integrating additional device configuration requirements. The data model is shown in Figure 5.9.
Figure 5.9: Heterogeneous devices data model
b) The Model Implementation
The sensor node allocation model can be performed with or without taking into account the presence of the internal walls inside the building. The homogeneous device allocation process starts from left to right and from top to bottom. The automatically generated locations of the sensor node devices can be altered manually to accommodate any device placement constraint and/or other requirements, such as security, visibility and others.
The sensor node device location principle is similar for both geometric pattern shapes. It consists of locating the first device in the first top-left position on the room ceiling (a 90° vertical translation to produce the 2D building layout shown in Figure 5.7) using the spacing distance of the selected device. The accurate spacing distance between two sensor node devices changes from 2D to 3D by incorporating the height of the room to ensure full coverage of the 3D space, as illustrated in Figure 5.11. The real spacing distance is given by formula [5.3].
The device location is the circle centre which is fully contained inside the top left of the ceiling room layout. The neighbouring devices' locations are then determined by coordinate increments, as illustrated in the coordinate calculation formulas listed below:
Yfd = Yltrc + SD × sin(45°) for the first row [5.10]
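A minimal sketch of the location generation is given below, under two assumptions, since formulas [5.3] to [5.10] are not reproduced here: the real spacing is derived from the sensing distance reduced by the room height (ceiling-mounted devices), and a square pattern is generated from the top-left ceiling corner. The room dimensions are illustrative.

# Generate square-pattern device locations with a height-adjusted spacing.
import math

def real_spacing_distance(sensing_distance_m: float, room_height_m: float) -> float:
    """Horizontal spacing once the ceiling height is taken into account."""
    return math.sqrt(max(sensing_distance_m ** 2 - room_height_m ** 2, 0.0))

def square_pattern_locations(room_width_m: float, room_length_m: float,
                             spacing_m: float):
    """Device centres laid out left-to-right, top-to-bottom."""
    locations, y = [], spacing_m / 2.0
    while y < room_length_m:
        x = spacing_m / 2.0
        while x < room_width_m:
            locations.append((round(x, 2), round(y, 2)))
            x += spacing_m
        y += spacing_m
    return locations

# Example: a 5 m sensing distance in a 3 m high room gives a 4 m spacing,
# and a 12 m x 8 m room receives six devices.
spacing = real_spacing_distance(sensing_distance_m=5.0, room_height_m=3.0)
print(square_pattern_locations(12.0, 8.0, spacing))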
The device type selection will be refined after the generation of the device locations, taking into account the specific requirements of the different rooms associated with their usage. This refinement process can be performed after the identification of the required homogeneous devices.
The sensor node allocation model can be performed with or without taking into account the presence of internal walls in the building. The allocation process starts from left to right and from top to bottom. The automatically generated locations of the sensor node devices can be altered manually to accommodate any device placement constraint. The generated sensor node locations for both models are illustrated in Figure 5.13.
The geometrical location distribution pattern is considered in this case study as the basis for the sensor node location generation. The consideration of both patterns, hexagonal and square, with different settings (localisation per room or per building, spacing distances, different clusters, etc.) provides extensive support for a multi-criteria sensor node localisation decision making approach. The concept of single and double sensing zones for homogeneous sensor nodes is used to evaluate the sensing overlap, shown shaded in Figure 5.10. This concept is an important aspect of the implementation of the safety factor used to increase the sensing and detection reliability.
5.3.5. Device Selection Refinement
The refinement of the device selection takes into account:
The device properties: accuracy, coverage, energy supply, and safety factor,
The frequency of environmental updates, and
The devices and WSN testing and maintenance.
Additional device and WSN requirements inherent to several policies for their configuration and deployment are integrated at a later phase. This refinement, shown in Figure 5.14, is a manual process which will be fully automated in a second phase, when the sensor and WSN information fusion is fully implemented, enabling the automatic selection of the appropriate device for a specific sensing, monitoring or tracking context situation.
The manual selection refinement procedure is based on selecting a device from the database (sensor node devices or heterogeneous devices) and setting it as the desired device for the selected entry in the list of allocated devices.
This procedure results in heterogeneous data capture when different sensor node devices, composed of different sensor architectures and configurations, are selected.
The sensor node devices' configuration includes their programming to enable them to perform a variety of operations described below, which include the definition and control of clusters of nodes. This programming, which is not included in this case study, aims at implementing high-level operations using the concept of protothreads, corresponding to external macros written in the same development language and integrated in the main support system. However, the devices' configuration and deployment in the WSN are fully functional.
5.3.6.2. Sensor Node Battery Replacement Schedule
Depending on the sensor battery lifetime, which can range from a few hours to two decades, it is important to monitor the sensor batteries' energy consumption and plan their replacement when their threshold is reached or battery failures are detected. The network specifications require the definition of the following battery characteristics, which might differ from one battery to another, mainly when batteries with different lifetimes are used in the network nodes:
Date of installation/replacement,
Power consumption,
Time of use, and
Expected date of replacement.
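A minimal sketch of how the expected replacement date could be derived from these characteristics is given below, assuming the date is reached once a configured fraction of the battery capacity has been consumed at the node's average current; the figures are illustrative, not measured data.

# Estimate a battery replacement date from capacity and average current.
from datetime import date, timedelta

def expected_replacement_date(installed: date, capacity_mah: float,
                              avg_current_ma: float,
                              replacement_threshold: float = 0.9) -> date:
    """Date at which the configured fraction of the capacity is consumed."""
    usable_hours = (capacity_mah * replacement_threshold) / avg_current_ma
    return installed + timedelta(hours=usable_hours)

# Example: a node installed on 1 March 2016 drawing 0.42 mA on average
# from a 2400 mAh battery is scheduled for replacement after roughly 214 days.
print(expected_replacement_date(date(2016, 3, 1), 2400.0, 0.42))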
The sensor node pre-configured reading interval specification requires the definition of the following sensor characteristics:
The reading time is an important event processing specification, as the data processing time might differ should delays occur during the data communication due to differences in the network link distribution [216]. Such differences can alter the real sequence of application context events as expressed by the sensing data.
Figure 5.16: Sensing spacing distance segmentation to form sensor nodes layers
role of sensor nodes in their corresponding clusters. The device status for sensor nodes and sensors is specified depending on their corresponding sensing requirements and safety factor. Their configuration is supported by the process model shown in Figure 5.18, which includes the following main rules:
Always-active sensor nodes are those localised either at the centre of a cluster of sensor nodes or nearest to the energy supply use points, doors and windows, whereas the remaining sensor nodes operate in a duty-cycled manner.
Active sensor nodes localised at the centre of a cluster of sensor nodes have additional functionality to perform the network functions at the sensor node, and also data processing and storage.
Figure 5.18: Sensor node and sensor status process model setting
The sensor node configuration is a continuous and iterative process, as its relative position in the WSN could require its re-configuration, depending on several conditions:
WSN performance,
Sensor node performance and status, and
Events detected in the neighbourhood.
This process model supports the implementation of the sensor node status configuration policy, which is based on a selective setting, a sort of configuration filter. The main options are:
Energy supply use point,
Door and Window located,
Selected room,
All rooms,
Selected sensor node, and
Room usage.
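A minimal sketch of the status-setting rules listed above is given below; the node attributes are illustrative flags, and the richer configuration filter options (selected room, room usage, etc.) are reduced to booleans for brevity.

# Assign a sensor node status from its location-related attributes.
from dataclasses import dataclass

@dataclass
class SensorNode:
    node_id: str
    is_cluster_centre: bool
    near_energy_supply_point: bool
    near_door_or_window: bool

def assign_status(node: SensorNode) -> str:
    """'Active' for cluster-centre or safety-critical locations,
    otherwise the node is scheduled in a duty-cycled mode."""
    if node.is_cluster_centre:
        return "active (cluster head: networking, processing, storage)"
    if node.near_energy_supply_point or node.near_door_or_window:
        return "active"
    return "duty-cycled"

print(assign_status(SensorNode("sn-07", False, True, False)))   # active
print(assign_status(SensorNode("sn-12", False, False, False)))  # duty-cycled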
An SNC is required to have a sensor node cluster head (snCH), which must be supplemented by a sensor node cluster head substitute (snCHS) in case the snCH malfunctions.
The network needs to preserve energy by assigning, taking into account the sensing requirements at each sensor node location, the sleeping mode to sensor nodes, enabling them to switch between active and sleep modes depending on the network activity.
The gulf existing between the sensing spacing distances (SSD) of sensors that are needed at the same sensing location, which might result in grouping sensors of the same SSD in different types of sensor nodes.
The prevailing conditions while using the network, and the probable detected events requiring migration, which might suggest the network reconfiguration.
These three structural characteristics can be modelled around the concept of a virtual cluster
defined at a logical level whereas the initial clustering made of the aggregated SNCs
corresponds to the physical level, which is associated to one, or several logical levels, as
illustrated in Figure 5.19.
The virtual cluster, defined at a logical level, enables flexibility in the WSN deployment to support a variety of context applications more effectively. Examples of cluster logical levels are:
All the sensor nodes involved in an emergency response are configured as one sensor node cluster with more reliable connection specifications (active connection, high-performance routers, high-band signal), whereas the other sensor nodes of the WSN will be configured separately in other clusters.
All the non-switchable sensor nodes are configured as one sensor node cluster, whereas the other sensor nodes of the WSN will be configured separately in other clusters.
Sensor node clustering policies are an essential tool for the WSN configuration. This tool is supported in the HIDSS by a process model elaborated from the information fusion resulting from the configuration context knowledge domain.
5.3.8.2. Cluster Aggregation Level
In this research, this homogenisation is performed by reducing the size difference between the most and the least populated sensor node clusters of the network. This size difference, called here the aggregation level, is comprised between 0 and 3. The aggregation of the sensor node clusters is shown in Figure 5.20.
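A minimal sketch of this size homogenisation is given below. For brevity it moves arbitrary nodes between clusters until the size difference is within the chosen aggregation level; the actual aggregation would only move nodes between spatially adjacent clusters, so this is an illustration rather than the implemented procedure.

# Rebalance cluster sizes to within a chosen aggregation level (0-3).
from typing import Dict, List

def homogenise_clusters(clusters: Dict[str, List[str]], level: int) -> None:
    """Move nodes from the largest to the smallest cluster, in place,
    until max(size) - min(size) <= level (a difference of 1 cannot be
    reduced further by moving whole nodes)."""
    while True:
        largest = max(clusters, key=lambda c: len(clusters[c]))
        smallest = min(clusters, key=lambda c: len(clusters[c]))
        if len(clusters[largest]) - len(clusters[smallest]) <= max(level, 1):
            return
        clusters[smallest].append(clusters[largest].pop())

snc = {"SNC1": ["n1", "n2", "n3", "n4", "n5", "n6"], "SNC2": ["n7"], "SNC3": ["n8", "n9"]}
homogenise_clusters(snc, level=1)
print({name: len(nodes) for name, nodes in snc.items()})  # sizes now differ by at most 1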
One main characteristic of WSNs is the heterogeneity of their device capabilities to perform a variety of configuration, sensing, computation and communication tasks by active sensor nodes, which can also be allocated to control and assist a group of passive sensor nodes. When higher capabilities are required by the application, this can result in the creation of a base node integrating a variety of devices other than the sensor nodes to provide the needed capabilities. This can have an impact on the WSN architecture, requiring the use of a multi-tier cluster-based architecture. An example of a network topology advocated for use in this study is illustrated in Figure 5.21.
5.3.9.1. Node Type
The WSN generation requires the definition of its different elements, and their network
architecture. These elements are:
Coordinator node (CN or Gateway: G): one per WSN; its main function is to initiate the network formation by configuring the channels, node IDs and profile.
Router node (RN or Router: R): an optional network component involved in routing network messages by maintaining a routing table and managing local address allocation and de-allocation for its assigned end device nodes.
End device node (EDN): also an optional network component, used in low-power operation optimisation by taking advantage of sniff and sleep techniques.
Possible connection chains: EDN-CN, EDN-RN-CN, EDN-RN-RN-CN, and RN-CN.
5.3.9.4. The WSN Mesh Generation
The WSN generation consists of wirelessly connecting all the homogeneous and heterogeneous devices required to support the context applications. These connections require the choice of:
Network characteristics: node type, node connection type, and network topology.
Device connection policy, based on a strategy to attach a sensor node to a node type using the following options:
o Sequential or alternating sequential attachment of devices to node types,
o Sensor nodes of a room per node type, or alternating room sequential attachment,
o Spacing distance per node type,
o Separating sensor nodes from heterogeneous devices,
o Cluster of sensor nodes per node type.
The device connection policy provides device attachment flexibility, enabling the composition of the options mentioned above. For example, sequential device attachment can be composed with a separation between homogeneous and heterogeneous devices, as illustrated in the WSN generation shown in Figure 5.22.
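A minimal sketch of this composed policy is given below: each group of devices is attached to the routing node types in turn, leaving a configurable number of empty slots per node as a safety factor. All router and device names are illustrative, and devices that do not fit within the reserved capacity are simply skipped in this sketch.

# Sequential attachment of devices to node types, homogeneous devices first.
from typing import Dict, List

def attach_devices(routers: List[str], homogeneous: List[str],
                   heterogeneous: List[str], slots_per_router: int,
                   reserved_slots: int = 1) -> Dict[str, List[str]]:
    """Round-robin attachment with a per-router safety-factor reservation."""
    capacity = slots_per_router - reserved_slots
    attachment = {r: [] for r in routers}
    for group in (homogeneous, heterogeneous):
        for i, device in enumerate(group):
            router = routers[i % len(routers)]
            if len(attachment[router]) < capacity:
                attachment[router].append(device)
    return attachment

plan = attach_devices(["R1", "R2"], ["fd1", "fd2", "fd3"], ["cam1", "rfid1"],
                      slots_per_router=4)
print(plan)  # {'R1': ['fd1', 'fd3', 'cam1'], 'R2': ['fd2', 'rfid1']}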
Of great importance in the WSN generation process is the safety factor required to reserve
empty slots in the different node types that will be needed to accommodate clusters of sensor
nodes that require reconfiguration due to a low bandwidth signal.
A population of people wearing a wrist band, and a collection of objects with attached tags, are identified and localised in this application by a sensor node composed of an RFID identifier and the sensors needed to control the environment context of these people and objects. The corresponding data model is shown in Figure 5.23.
In this application, every person is allocated a set of access privileges to allow or restrict room access in accordance with the room capacity, based on an occupancy model defining the number of people allowed in each room. Objects can be incorporated in any room, and collision detection between people and objects is implemented for the definition of evacuation paths, suggesting contouring them. The case study includes an application that enables the tracking of people in the building, the storing of the different room accesses, and the identification of the people located in each room. This implementation is illustrated in the interface screen shown in Figure 5.24.
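A minimal sketch of the access check applied when a tagged person requests to enter a room is given below, combining the person's access privileges with the room occupancy model; the identifiers and capacity limits are assumed for illustration.

# Access decision combining privileges and the room occupancy model.
ROOM_CAPACITY = {"room_1": 25, "room_2": 10}          # occupancy model
ACCESS_PRIVILEGES = {"person_42": {"room_1"}}          # allowed rooms per person
current_occupancy = {"room_1": 24, "room_2": 3}

def may_enter(person_id: str, room_id: str) -> bool:
    """Grant access only if the room is permitted and below its capacity."""
    permitted = room_id in ACCESS_PRIVILEGES.get(person_id, set())
    has_space = current_occupancy[room_id] < ROOM_CAPACITY[room_id]
    return permitted and has_space

print(may_enter("person_42", "room_1"))   # True: permitted and one place left
print(may_enter("person_42", "room_2"))   # False: no privilege for room_2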
The use of colours in the above screenshot enhances the system user interaction, indicating the movement choices for the selected person: moving within the same room or to room 1, as indicated in the rooms frame in green.
The decision support functions demonstrated in the following sections include:
Visualisation of the WSN entities,
A navigation path of the building with decision information control,
The homogenous data capture,
A sensor node cluster alarm reaction model, and
The display of evacuation paths.
The visualisation of the WSN entities and their status is displayed as the result of the data and information fusion process, involving the activation of intelligent agents interacting with each other when their corresponding services are automatically invoked during the deployment of the WSN and the execution of context applications. Supporting the monitoring of the domain activity, the intelligent agents composing the invoked services are activated automatically when event conditions are identified, and an automated instantaneous reaction is performed. An example of the monitoring of the WSN architecture in the virtual control room is illustrated in Figure 5.25.
5.5.2. Building Navigation Path
Building a navigation path, which finds a way between the different rooms of a building, can be used to support:
The wall segments of the rooms, particularly the door and window segments, for the identification of exits inside and outside the building, and
The evacuation model for the calculation of the theoretical evacuation time, using in combination the room occupancy model and the evacuation rate.
The theoretical evacuation time can be improved by incorporating the position of each person in the building and the simulated time required for them to leave the building. This is illustrated in the decision tool shown in the next section.
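A minimal sketch of the theoretical evacuation time calculation is given below, under the simple assumption that each room empties through its exit at a fixed evacuation rate (people per minute) and that rooms evacuate in parallel, so the slowest room governs the building; the figures are illustrative.

# Theoretical evacuation time from the occupancy model and evacuation rates.
def room_evacuation_minutes(occupants: int, evacuation_rate_per_min: float) -> float:
    return occupants / evacuation_rate_per_min

def building_theoretical_time(occupancy: dict, rates: dict) -> float:
    """Theoretical evacuation time: the slowest room governs the building."""
    return max(room_evacuation_minutes(occupancy[r], rates[r]) for r in occupancy)

occupancy = {"room_1": 24, "room_2": 8, "hall": 60}
rates = {"room_1": 12.0, "room_2": 10.0, "hall": 30.0}   # people per minute
print(building_theoretical_time(occupancy, rates))        # 2.0 minutes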
Enhanced messaging devices can be incorporated in the tags along with sensors, and the elaborated information can also be stored to personalise the emergency evacuation.
Figure 5.26: Building navigation path
The evacuation path finding takes into account the presence of objects in the path, and collision detection and path re-planning are performed when required. However, the evacuation time does not integrate the waiting time at the collision nodes, which are the evacuation points (doors or windows) between the different rooms. In the evacuation data displayed in Figure 5.27, the outside of the building is referred to as Room 0.
5.5.4 Sensor Node Cluster Alarm Reaction Model
A sensor node cluster alarm reaction model has been developed to demonstrate the
coordination of the different detectors forming the cluster in showing the progression of the
alarm along the fire propagation.
This model is based on calculating the fire proximity to the detector location, which sets its alarm on when this distance becomes equal to or smaller than the sensing distance. As the fire propagates, the distance between the fire and each detector in the cluster and the other surrounding detectors is updated by integrating the effect of the two following parameters:
TLTA: Time left to alarm.
The data captured by the WSN from the detectors is detailed in the next section.
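A minimal sketch of the alarm reaction rule is given below. As an assumption, the fire is modelled as a circle growing from an origin at a constant spread rate, and the alarm is raised once the fire front comes within the detector's sensing distance; all the numeric values are illustrative.

# Trigger a detector alarm when the fire front reaches its sensing distance.
import math

def fire_radius(spread_rate_m_per_min: float, minutes: float) -> float:
    return spread_rate_m_per_min * minutes

def alarm_on(detector_xy, fire_origin_xy, sensing_distance_m: float,
             spread_rate_m_per_min: float, minutes: float) -> bool:
    """True when the fire front is within the detector's sensing distance."""
    dx = detector_xy[0] - fire_origin_xy[0]
    dy = detector_xy[1] - fire_origin_xy[1]
    distance_to_front = math.hypot(dx, dy) - fire_radius(spread_rate_m_per_min, minutes)
    return distance_to_front <= sensing_distance_m

# Example: a detector 10 m from the fire origin with a 4 m sensing distance
# and a 0.5 m/min spread rate raises its alarm after 12 minutes.
print(alarm_on((10, 0), (0, 0), 4.0, 0.5, minutes=11))  # False
print(alarm_on((10, 0), (0, 0), 4.0, 0.5, minutes=12))  # True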
The data captured by the WSN, which is heterogeneous as it corresponds to the output of different homogeneous and heterogeneous devices that may have different configurations, is shown in Figure 5.29. In the case study supporting the implementation of the system prototype developed in this research, a data reading from a device is composed of nine values:
Device identification,
Six values allocated to sensor data, and
Two values allocated to RFID data (x and y coordinates).
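A minimal sketch of unpacking one such reading is given below; the field order and value types are the natural reading of the description above and should be treated as assumptions.

# Unpack a nine-value device reading into named fields.
from typing import NamedTuple, Sequence, Tuple

class DeviceReading(NamedTuple):
    device_id: str
    sensor_values: Tuple[float, ...]   # six sensor outputs
    rfid_x: float                      # RFID-derived x coordinate
    rfid_y: float                      # RFID-derived y coordinate

def parse_reading(raw: Sequence) -> DeviceReading:
    if len(raw) != 9:
        raise ValueError("a reading is composed of nine values")
    return DeviceReading(str(raw[0]), tuple(float(v) for v in raw[1:7]),
                         float(raw[7]), float(raw[8]))

print(parse_reading(["SN-03", 21.5, 410, 38, 0, 0, 1, 6.4, 3.2]))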
5.6 Summary
The implementation of the system prototype has been limited to the first two steps of the sensor and WSN data fusion, involving data capture and processing. Although the subsequent steps, concerning the support design for active nodes performing in-network activities and the information fusion involving intelligent agents, can take place only after the configuration and deployment of the WSN in the real world, the case study has demonstrated the practicality of the conceptual generic design framework for hybrid intelligent decision support systems, illustrating the integration of the three major management components for data, knowledge and models. The integration of WSN and RFID is also illustrated in the design support and use of hybrid smart devices grouping WSN and RFID capabilities that enable the functional integration of knowledge processes.
Chapter 6: System Evaluation
6.1 Introduction
This chapter is firstly a theoretical complement to the system design and development phase, made necessary by the specific complex implementation of hybrid intelligent decision systems, which support distinctly, in the study of knowledge domains, the processes of data capture and the generic development of domain context applications.
The second part of this chapter is devoted to the discussion of the models used in the case study processed by the prototype system. The evaluation objective is to ensure that the system meets its requirements from the end-users' point of view, and the focus is not only on the implementation but also on the system prototype, which can be evaluated. The system integration and testing aim at verifying the practicability of the conceptual design framework and the functionality of the solution.
Knowledge discovery is based on the use of a combination of sequential pattern mining and association rule discovery techniques, supported by knowledge functions and processes, to elaborate new meta-knowledge and rules.
6.2.2 Meta-Knowledge
The meta-knowledge is required to guide future planning, execution or subsequent phases of the system in the real world, using a preference framework for situations in which the realisation of the goals associated with the intelligent agents of the distributed multi-agent system is essential.
This framework is based primarily on behavioural types that describe the system agents’
observable characteristics to be used for the evaluation of their different interactions with the
help of interdependence models. These models are proposed in this work to support the
system evaluation by end-users, by describing the system agents’ intentions and ranking their
influences.
Of great importance in the description of agent characteristics or attributes is the fact that an intelligent agent has only partial information for its own influence valuation, depending on other agents for their influences. The agent influences are also marginally dependent but conditionally independent. These differences in the agent influence dependence structure have significant implications for end users' effective understanding of how agents should behave to meet their performance goals when using their own given properties and reflecting the other agents' properties.
6.3 The Evaluation Methodology
Of great importance in evaluating the implemented system is the relevance of the domain knowledge provided by domain experts, mainly when this knowledge is not predetermined. The objects of a knowledge domain are described by ontologies contained in a knowledge base system. Problem solving methods specify how knowledge tasks are decomposed and structured in terms of goal hierarchies composed of collections of smaller goals associated with the basic knowledge tasks composing the knowledge domain processes. It is thus of interest to ensure the coherence between the domain knowledge description and the primitive knowledge tasks, in an effort to acquire knowledge from end-users, and also to clearly specify for each problem solving method both the available and missing knowledge and what can be achieved by the capabilities of the methods invoked by the knowledge domain processes.
These elements aim at ensuring that end-users using the knowledge acquisition tools are able
to modify the problem solving knowledge of a knowledge domain system. More importantly,
the required meta-knowledge is composed of information about the system prototype, mainly
the efficiency and reliability of its components.
6.3.1.2 The Users' Problem Solving Knowledge
This step is essential in the implementation process of a system for the support and use of
interdependency tests to acquire problem solving knowledge from the system end-users. It
allows the system end-users to practice the knowledge and model capabilities of the hybrid
integrated decision system. This practice is based on the use of a variety of examples and interdependency models to enter knowledge by relating individual components of the knowledge base to each other, in order to incorporate and support their additional expectations.
Direct authoring of the knowledge domain components' interaction, used as trial-and-error testing when using knowledge acquisition tools, is increasingly needed when examples are not readily available, in order to use demonstrated examples to create interaction rules. This step constitutes a major challenge in the study and analysis of a knowledge domain, as an important part of the ongoing knowledge acquisition process, to increase the interaction between end users and knowledge engineers. This interaction provides necessary and valuable end-user feedback, enhancing the users' understanding of the knowledge base elements and the knowledge acquisition process, and extending the development of knowledge tools.
In this step, end-user interdependency models, built around a varied number of user cases to examine knowledge interaction rules, are related to individual system components of both the knowledge and model bases. This enables end users to be guided in specifying first their problem-solving knowledge, then addressing several questions and hypotheses for the acquisition of problem-solving knowledge. The required knowledge consensus, for example in the sensor and WSN configuration task, resides in the fact that there is an interdependency between the problem solving knowledge for finding fixes for violated configuration constraints and the definition of the constraints and their possible fixes.
Of great interest in the use of interdependency models is the support provided to end-users to model the system agents' intentions, represented by goals, by assessing how they affect other agents' behaviours in terms of direct goals or reciprocal influence. These causal models are proposed for the illustration of the agents' distinguishing influence to support:
6.3.1.3 The Logical and Physical Architectures
The system's logical and physical architectures are additional sources for the identification of specific areas of concern in the system implementation and network deployment, mainly when trying to acquire a full understanding and control of the system architecture and to locate any performance bottleneck. These concerns can be accentuated by the topology of the network supporting the real-time data capture and processing as a multi-sensor data fusion, supporting both soft and hard data, resulting from the design and composition of homogeneous and heterogeneous sensor nodes, their configuration and deployment, and the data communication.
The importance of the system architecture during the ongoing evaluation process resides in
knowing, for each knowledge function, which components of the Three-tier or Multi-Tier
architecture communicate with one another and how the different involved devices support
this communication, and also which third party services are invoked and the capabilities
provided.
The logical architecture is structured around distributed applications implemented in the form of web services, and is supported by the physical architecture, which can impede the effective support of knowledge functions and processes. The physical architecture is composed of the database, the web server and the browser. It can be based on a single machine playing the role of host for several applications, or include clusters of machines using the same applications. Its internal architecture is based on a multi-level interaction between the different intelligent agents of the distributed multi-agent system. The evaluation of this interaction is complex and requires the use of interdependency models.
Cyclical interdependence establishes a testing dependency between two or more
knowledge functions and processes, and can be reciprocal or mutual, conveying a
complex testing sequence.
The integrated ad hoc WSN-RFID network is aimed at hosting several context applications with
different characteristics, supported by HIDSS, which integrates the different management
support functions.
The data management evaluation has focussed on checking the following:
The hierarchy of data, via the use of a master source database mechanism and data
indexes,
The logical links between the different tables of the database, using data indexes, and
The data updates, supported by the database grid used to enter data and modifications,
depending on the different operations of the database navigator.
Online transactional and analytical processing, illustrated in Figure 3.4 and Figure 5.1,
has been evaluated as follows (a minimal sketch is given after the list below):
Transactional processing in device allocation, and
Analytical processing in analysing device state changes to establish their proper
functioning.
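The following minimal sketch, written in Python with the standard sqlite3 module, illustrates the kind of structure evaluated above: a master table linked to a readings table through an indexed foreign key, a transactional insert corresponding to a device allocation, and an analytical query summarising device readings. The table and column names are illustrative assumptions and do not reproduce the HIDSS database schema.

# Minimal sketch (illustrative schema, not the thesis WSN database):
# a master table of sensor nodes linked by an indexed foreign key to a
# readings table, with a transactional insert (device allocation) and
# an analytical summary query.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sensor_node (
    node_id INTEGER PRIMARY KEY,
    room    TEXT NOT NULL,
    status  TEXT NOT NULL DEFAULT 'inactive'
);
CREATE TABLE reading (
    reading_id INTEGER PRIMARY KEY,
    node_id    INTEGER NOT NULL REFERENCES sensor_node(node_id),
    temp_c     REAL NOT NULL,
    taken_at   TEXT NOT NULL
);
CREATE INDEX idx_reading_node ON reading(node_id);  -- logical link between tables
""")

# Transactional processing: allocate (insert) a device and record a reading.
with con:
    con.execute("INSERT INTO sensor_node(node_id, room, status) VALUES (1, 'R101', 'active')")
    con.execute("INSERT INTO reading(node_id, temp_c, taken_at) VALUES (1, 21.5, '2016-06-01T10:00')")

# Analytical processing: summarise readings per room to check proper functioning.
for row in con.execute("""
    SELECT n.room, COUNT(r.reading_id), AVG(r.temp_c)
    FROM sensor_node n JOIN reading r ON r.node_id = n.node_id
    GROUP BY n.room
"""):
    print(row)  # ('R101', 1, 21.5)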
Of great interest in the data exchange is the choice of the communication protocol used to
connect client applications to the application server. Each protocol has different advantages
and constraints that condition its choice. The communication protocols, which have different
specifications for exchanging structured information in the implementation of web-based
service applications, include the web connection, the SOAP connection and the socket
connection.
The client application is created after the database server application has been created and
run. By specifying the application server at design time, it has been possible to connect to
it and test the client connection. A client could instead have been created without
specifying the application server at design time, supplying the server name only at runtime;
the drawback of doing so is that it is not possible to check at design time that the client
and the database server application work together as expected.
Alternatively, instead of a web connection, the data exchange can use a SOAP connection or a
socket connection. However, it has not been possible to use the same connection for multiple
remote data modules: a separate connection component is required for every remote data
module on the application server, each connection component represents the connection to a
single remote data module, and this connection component cannot be a SOAP connection.
After creating the web module with the dispatcher, invoker and publisher, the server
application dialog displays a message box asking whether a new interface should be defined
for the client SOAP module. The client application must know the application server's
interface declaration at compilation time, and WSDL documents describe the interfaces called
by the database server application; these interfaces are different from the SOAP data module
interface. The deployment of the web module is supported by a connection to the WSN server
using a client socket, through which the emergency operator connects via their web address
and receives a buffer of data made available by the server.
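The following minimal sketch illustrates such a client socket receiving a buffer of data; the host name, port and buffer size are illustrative assumptions and not the parameters of the implemented connection.

# Minimal client-socket sketch (host, port and buffer size are
# illustrative assumptions, not the HIDSS implementation).
import socket

WSN_SERVER = ("wsn-server.example.local", 9000)  # hypothetical address

def receive_buffer(max_bytes: int = 4096) -> bytes:
    """Connect to the WSN server and read one buffer of sensor data."""
    with socket.create_connection(WSN_SERVER, timeout=5.0) as sock:
        return sock.recv(max_bytes)

if __name__ == "__main__":
    try:
        data = receive_buffer()
        print(f"received {len(data)} bytes")
    except OSError as exc:
        print(f"connection failed: {exc}")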
Although the data exchanged with emergency operators is not yet structured and formatted,
data contained in a database memo field, representing a field chosen randomly from a table
of the database, has been exchanged using a client connection.
Of great importance in the model testing is the distinction between system validation, which
is concerned with building the right system, and model verification, which is an
implementation process that helps with building the models and a fully functional system.
The requirements themselves are therefore scrutinised for consistency and completeness,
making model validation and testing a reciprocal activity.
They are used to understand, specify and elaborate policies in several domains of use and
control systems. Their testing aims at measuring the overall functional compliance of the
system with the specification through the generation of:
Model traces, which are comprised of inputs and expected outputs, and
Test cases, which contain the various options for an implementation.
These are derived from the solution and system requirements, and must be sufficiently
precise to serve as a basis for the generation of significant and meaningful test cases.
Although it is possible to generate these model-based test suites automatically, this
evaluation is based only on manual test suites for the detection of:
Failures,
Programming errors, and
Observable differences between the intended and actual behaviours of the system.
The device allocation model, which consists of spacing sensor node devices equally from each
other, is based on a dynamic sensor node allocation algorithm for full sensing and tracking
coverage. Two types of algorithms have been implemented:
The hexagon model, based on the central place theory, i.e. each sensor node
corresponds to the centre or a vertex of a hexagon, and
The square model.
These two models are based on the most effective geometric patterns for covering areas with
a minimum sensing overlap. The geometrical location distribution pattern is considered in
this framework as the basis for sensor node localization. Considering both patterns,
hexagonal and square, with different settings (localization per room or per building,
spacing distances, different clusters) provides extensive support for a multi-criteria
sensor node localization decision-making approach.
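The following minimal sketch illustrates how the number of sensor nodes can be estimated for the two allocation patterns over a rectangular area, with nodes placed either on a square lattice or on offset rows approximating the hexagonal pattern. The area dimensions are illustrative and do not correspond to the building layout of Figure 5.12.

# Minimal sketch of the two allocation patterns over a rectangular area:
# nodes on a square lattice versus a hexagonal (offset-row) lattice for a
# given spacing distance. Dimensions are illustrative assumptions.
import math

def square_nodes(width: float, height: float, spacing: float) -> int:
    """Nodes placed on a square grid with the given spacing."""
    cols = math.floor(width / spacing) + 1
    rows = math.floor(height / spacing) + 1
    return cols * rows

def hexagon_nodes(width: float, height: float, spacing: float) -> int:
    """Nodes placed on offset rows (hexagonal pattern); the row pitch is
    spacing * sqrt(3)/2 and every other row is shifted by spacing/2."""
    row_pitch = spacing * math.sqrt(3) / 2
    rows = math.floor(height / row_pitch) + 1
    total = 0
    for r in range(rows):
        offset = (spacing / 2) if r % 2 else 0.0
        total += math.floor((width - offset) / spacing) + 1
    return total

if __name__ == "__main__":
    for d in (3, 4, 5):
        print(f"{d} m: hexagon = {hexagon_nodes(20, 15, d)}, "
              f"square = {square_nodes(20, 15, d)}")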
The increase in the number of sensor nodes when the spacing distance is reduced poses the
problem of how optimally the grouping of sensors into sensor nodes can be organized. Table
6.1 shows, for example, for the building layout shown in Figure 5.12, that 7 extra nodes
(23.33%) are required when a 3 m spacing is used instead of 4 m for the hexagon pattern.
NLDP                        Hexagon             Square
Spacing distance (m)        3     4     5       3     4     5
Number of sensor nodes      37    30    18      36    20    18
Difference                  7     12    ---     16    2     ---
Table 6.1: Relation between spacing distance and number of sensor nodes
The concept of sensing overlap, composed of single and double sensing zones for homogeneous
sensor nodes, is used to evaluate the sensing overlap shown shaded in Figure 5.10 and
reported in Table 6.2.
Relevant information can be extracted from Table 6.2: the ratio of double to single sensing
zones is 2.65 when using the square allocation pattern and only 0.52 when using the hexagon
allocation pattern. This suggests a preference for the square model, which results in a
sensing overlap shared simultaneously between two operating sensor nodes. This sensing
overlap can result in some activity redundancy (for example, alarm stripping) which, if
detected, can be processed at the node level provided that at least one of the sensor nodes
involved is active.
When the temperature of the sensed region read by one sensor node exceeds a pre-determined
setting by a number of degrees, taking into account the sensing tolerance value (a minimal
threshold check is sketched after the list below), the following actions are taken to:
Show the behaviour of the different sensor nodes in terms of activation when a fire is
detected, and
Evaluate the integration of the different models in HIDSS.
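The threshold check mentioned above can be sketched as follows; the alarm setting, the tolerance value and the sample readings are illustrative assumptions.

# Minimal sketch of the temperature-threshold check with a sensing
# tolerance; the threshold and readings are illustrative values.
THRESHOLD_C = 57.0   # pre-determined alarm setting (assumed)
TOLERANCE_C = 2.0    # sensing tolerance of the device (assumed)

def exceeds_setting(reading_c: float,
                    threshold_c: float = THRESHOLD_C,
                    tolerance_c: float = TOLERANCE_C) -> bool:
    """True when the reading exceeds the setting by more than the tolerance."""
    return reading_c - threshold_c > tolerance_c

if __name__ == "__main__":
    for temp in (55.0, 58.5, 61.0):
        print(temp, "->", "alarm" if exceeds_setting(temp) else "normal")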
Figure 6.1: Fire emergency planning mode
The fire propagation knowledge domain is composed of several classes of algorithms used to
simulate the spread of fire, modelled as a complex dynamic system. This complexity is
characterised by a composition of combined and integrated sub-problems, solved using
analytical and computational behaviour models integrated in a simulation support tool based
on fire propagation behaviour prediction. Examples of such sub-problems are:
The model implemented in the case study is only a simple prediction based on the fire
temperature rise and expansion, without taking into account the different fire-influencing
factors, which include:
Checking the Data Validity and Accuracy
Of great importance are the checking of the data validity and accuracy and the understanding
of the distinction between ‘primary’ and ‘secondary’ sources of information, the primary
source being the WSN and the secondary source being the fire propagation model. This data
validity checking aims at:
The predicted values provided by the fire propagation model are used by HIDSS to take the
following control actions:
Fire Progression
Real time fire progression data is available from the fire emergency planning model as
discussed above.
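A minimal sketch of the simple prediction described above is given below, assuming a linear temperature rise at the seat of the fire and a radial expansion of the affected zone; the rates, time step and horizon are illustrative assumptions and not the values used in the case study.

# Minimal sketch of a simple fire-progression prediction: linear
# temperature rise at the fire seat and radial expansion of the affected
# zone over time. All rates are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FireState:
    time_s: float
    temp_c: float     # temperature at the fire seat
    radius_m: float   # radius of the affected zone

def predict(ambient_c: float = 20.0,
            rise_c_per_s: float = 0.8,
            spread_m_per_s: float = 0.05,
            horizon_s: float = 300.0,
            step_s: float = 60.0):
    """Yield predicted fire states at fixed time steps."""
    t = 0.0
    while t <= horizon_s:
        yield FireState(t, ambient_c + rise_c_per_s * t, spread_m_per_s * t)
        t += step_s

if __name__ == "__main__":
    for state in predict():
        print(f"t={state.time_s:5.0f}s  temp={state.temp_c:6.1f}C  radius={state.radius_m:4.1f}m")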
Figure 6.2: Real-time people localisation
There is a read error rate associated with the use of RFID when supporting the functions of
localising and tracking people and objects. These errors are due to the fact that integrated
RFID-WSNs are complex hybrid systems, consisting of analogue and digital hardware and
software components, with some security issues that include:
Figure 6.3: Evacuation requirements mode
This model can be used in emergency preparedness to support the evacuation risk assessment.
As can be seen from the room flow diagram, the theoretical evacuation time at the room
safety exits is an indication of the evacuation bottlenecks that would require appropriate
support actions.
The problem of modelling evacuation routes from a building and out of an affected area is
complex in the sense that the pre-defined set of exit points, illustrated in Figure 6.4, can
be affected by the declared hazard and become a restricted area. The route-finding process
is the interpretation of the conditions contained in the evacuation risk assessment plan and
their evaluation in the context of the hazard occurring. Hazards include fire, building
structure, utility and equipment failures, and accidents.
The mapping between conditions and the hazard context aims at eliminating unsafe evacuation
exits and identifying validated safety exits within the non-affected area; this can be done,
for example, by determining plausible exits using an efficient heuristic algorithm as a
reference for comparative analysis and for optimal time and safety calculation, as sketched
below. The optimal safety calculation can include the potential hazard risks and the
evaluation of their impact on people's health. Pre-defined exits include external doors and
windows, internal doors, and additional specific exit points such as roofs, ground floors,
and others.
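The following minimal sketch illustrates the route-finding principle described above as a breadth-first search over a room and exit graph from which the exits invalidated by the hazard have been removed; the building graph and identifiers are illustrative assumptions, not the case-study layout.

# Minimal route-finding sketch: breadth-first search over a room/exit
# graph after removing exits invalidated by the hazard. The layout and
# identifiers are illustrative assumptions.
from collections import deque

GRAPH = {  # adjacency between rooms, corridors and exits (hypothetical)
    "R101": ["corridor"],
    "R102": ["corridor"],
    "corridor": ["R101", "R102", "exit_east", "exit_west"],
    "exit_east": [],
    "exit_west": [],
}
SAFETY_EXITS = {"exit_east", "exit_west"}

def find_route(start: str, blocked: set[str]) -> list[str] | None:
    """Shortest hop-count route from start to any non-blocked safety exit."""
    if start in blocked:
        return None
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node in SAFETY_EXITS and node not in blocked:
            return path
        for nxt in GRAPH.get(node, []):
            if nxt not in seen and nxt not in blocked:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

if __name__ == "__main__":
    print(find_route("R101", blocked={"exit_east"}))
    # ['R101', 'corridor', 'exit_west']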
Figure 6.5: Example of evacuation route and time
Walk from the point of localisation to the room door, and then from door to door until
reaching the external door,
Collision detection requires object avoidance and re-routing, observing the rules
mentioned above, and
Although the walking path during an escape can be random and disordered, straight movement
has been used in this implementation to simplify the random successive steps and reduce the
walk disorder. The navigation of people during the building evacuation is based on the
division of the evacuated area into evacuation zones (rooms), with the instructions to
follow being defined using a pre-defined route.
Evacuation Strategy
A full real-world implementation of the evacuation model will include giving escape
instructions via individual RFID tags or heterogeneous devices. In the case study proposed
in this research, the implementation is based on the generation of evacuation routes for
each evacuation zone or room.
The model supports an evacuation strategy that maximizes the number of people reaching
safety exits and minimizes the total evacuation time. It aggregates the different zone
evacuation paths, and this composition is an interactive and iterative real-time process
that takes into account the changing evacuation requirements, which are affected by the
hazard development and the dynamic conditions under which the evacuation takes place.
d) WSN Configuration
The WSN configuration is an iterative process composed of the following four steps:
Sensor node selection,
Sensor node configuration and adjustment,
Sensor node clustering, and
Cluster aggregation.
Configuration operation Options
Sensor node configuration o Energy supply use point
o Doors & windows sensor nodes located
o Sensor node
o Room usage
o Room
o Building
Sensor node clustering o Building
o Room
o Spacing distance
Clusters aggregation o Distance
o Size
WSN generation o Sequential
o Sequential alternate
o Router per room
o Router per room alternate
o Router per sensor node cluster
o Router per sensor node cluster alternate
o Homogeneous vs heterogeneous
o Homogeneous vs heterogeneous alternate
These options enable a thorough evaluation of the configuration policy shown in Figure 6.6,
which is implemented to support real-time sensor node and ad hoc WSN configuration and
re-configuration when performance and security issues are identified. The implementation of
the WSN configuration strategy has resulted in the following two WSN configurations.
Different cluster aggregation configurations can be generated by varying the aggregation
index, which expresses the cluster size difference, to increase the network homogeneity.
The diagram in Figure 6.6 shows a sensor node distribution into clusters with a maximal size
difference of six sensor nodes. Cluster aggregation consists of reducing the node size
difference between clusters to make them more homogeneous.
The diagram in Figure 6.7 shows the result of the cluster aggregation, which reduces the
number of clusters to four and the maximal size difference between clusters to four sensor
nodes.
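The aggregation principle can be sketched as a simple greedy merge of the smallest clusters until the size difference falls below the aggregation index; the cluster sizes used are illustrative and the heuristic is only one possible reading of the aggregation step.

# Minimal cluster-aggregation sketch: greedily merge the two smallest
# clusters until the maximal size difference does not exceed the
# aggregation index. Cluster sizes are illustrative assumptions.
def aggregate(sizes: list[int], max_difference: int) -> list[int]:
    """Return aggregated cluster sizes whose spread is <= max_difference
    (or a single cluster if the target cannot be reached)."""
    sizes = sorted(sizes)
    while len(sizes) > 1 and sizes[-1] - sizes[0] > max_difference:
        a = sizes.pop(0)      # smallest cluster
        b = sizes.pop(0)      # next smallest cluster
        sizes.append(a + b)   # merged cluster
        sizes.sort()
    return sizes

if __name__ == "__main__":
    clusters = [3, 4, 5, 7, 9, 9]   # initial size difference of 6
    print(aggregate(clusters, max_difference=4))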
e) WSN Generation
The WSN generation is the second level of WSN configuration, which requires the use of a
generation policy to support the network variance between homogeneity and heterogeneity, as
illustrated in Figure 6.8. This process is performed using the allocation options described
under WSN generation in Table 6.3 to distribute the sensor nodes between the different WSN
routers (a minimal allocation sketch follows the list below). The lifespan of the WSN is
increased by ensuring a homogeneous distribution of:
Nodes in the clusters, and
Clusters in the network.
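The sequential generation option can be sketched as a round-robin distribution of sensor nodes across routers, which yields the near-equal shares sought above; the node and router identifiers are illustrative assumptions.

# Minimal sketch of the sequential (round-robin) generation option:
# sensor nodes are dealt out to routers in turn so that each router
# receives a near-equal share. Identifiers are illustrative assumptions.
def allocate_sequential(nodes: list[str], routers: list[str]) -> dict[str, list[str]]:
    """Distribute nodes across routers in round-robin order."""
    allocation: dict[str, list[str]] = {r: [] for r in routers}
    for i, node in enumerate(nodes):
        allocation[routers[i % len(routers)]].append(node)
    return allocation

if __name__ == "__main__":
    nodes = [f"n{i:02d}" for i in range(1, 11)]
    print(allocate_sequential(nodes, ["router_A", "router_B", "router_C"]))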
6.4.4.1 Data Volumes, Visualisation and Backup
Although there are many third-party WSN tools for visualisation, the heterogeneous data
emitted by the individual homogeneous and heterogeneous sensor nodes, collected by a gateway
and forwarded to a base station connected to a server, is verified and stored by HIDSS in
the WSN database, from where it can be compiled and visualised. A real-world implementation
of HIDSS will result in large data volumes being handled for the deployment of context
web-based service applications.
The testing carried out in the case study has involved only a low volume of data. The
visualisation and backup were supported by the different knowledge functions and the
database management functions of HIDSS, as explained and shown in the previous chapter. The
sensor node and WSN architecture is visualised to show the configuration status of the
network, and also wherever required when invoked in the processing of the different
implemented knowledge functions.
6.4.4.2 In-Networking
In-networking is concerned with deploying active sensor nodes with storage and computing
capabilities, with the aim of real-time sensing and tracking control and the reduction of
data communication volumes in the network. The configuration of the network evaluation kit,
composed of only a few sensor nodes, has not allowed the testing and evaluation of
in-networking.
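Had the kit allowed it, in-networking could have been exercised along the lines of the following minimal sketch, in which a node buffers raw readings and forwards only a per-window summary, thereby reducing the data communication volume; the window size and summary fields are illustrative assumptions.

# Minimal in-network processing sketch: a node buffers raw readings and
# forwards only a per-window summary, reducing the communication volume.
# Window size and summary fields are illustrative assumptions.
from statistics import mean

class AggregatingNode:
    def __init__(self, node_id: str, window: int = 10):
        self.node_id = node_id
        self.window = window
        self._buffer: list[float] = []

    def read(self, value: float) -> dict | None:
        """Store a reading; return a summary only when the window is full."""
        self._buffer.append(value)
        if len(self._buffer) < self.window:
            return None                      # nothing transmitted yet
        summary = {"node": self.node_id,
                   "min": min(self._buffer),
                   "max": max(self._buffer),
                   "mean": round(mean(self._buffer), 2)}
        self._buffer.clear()
        return summary                       # one packet instead of several

if __name__ == "__main__":
    node = AggregatingNode("n01", window=5)
    for t in (20.1, 20.3, 20.2, 20.6, 20.4):
        packet = node.read(t)
    print(packet)  # {'node': 'n01', 'min': 20.1, 'max': 20.6, 'mean': 20.32}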
Configuration audit, to ensure that the sensor node devices and their individual
components meet the sensing or tracking requirements, and
Tracking of configuration changes that require reverting or undoing the changes made.
HIDSS supports the tracking of the sensor node and WSN control configuration and
re-configuration during the network's first deployment, repair and upgrade.
Several security tools incorporating anomaly detection functionalities can be integrated as
third-party tools, and HIDSS can interact with them to enhance intrusion detection and
assess impact. Software inventory management, on the other hand, is a full-scale HIDSS
specification for the logging and monitoring of knowledge function processing.
The network routing knowledge functions have not been modelled, and the network
management software testing for an ad hoc WSN is not included in the case study evaluation.
and in-network processing, particularly the network architecture and the communication
support.
The system performance evaluation is concerned with the measured system performance, the
interactive system performance, and the determination of the causes of system
malfunctioning. Of great importance in the measured system performance is the quality of
service associated with:
The sensing data reading, requiring adjustment of the reading frequency when
needed,
The sensing coverage, which accounts for both faulty sensing devices and low signal,
requiring the network re-configuration, and
The data communication and processing within and outside the network, requiring the
quality of service to be associated with both each sensor node and its corresponding
router.
This quality of service can be used to predict the system performance and error rates
regarding the aspects mentioned above.
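A minimal sketch of such a prediction is given below, combining the three aspects into a single per-node quality-of-service score used to flag nodes for re-configuration; the weights, threshold and input ratios are illustrative assumptions, not values derived from the evaluation.

# Minimal quality-of-service sketch: reading success rate, signal level
# and delivery ratio are weighted into a single per-node score.
# Weights and the threshold are illustrative assumptions.
def qos_score(read_success: float, signal: float, delivery: float,
              weights=(0.4, 0.3, 0.3)) -> float:
    """All inputs are ratios in [0, 1]; the score is their weighted sum."""
    w_read, w_sig, w_del = weights
    return round(w_read * read_success + w_sig * signal + w_del * delivery, 3)

def needs_reconfiguration(score: float, threshold: float = 0.7) -> bool:
    """Flag nodes whose predicted quality of service falls below the threshold."""
    return score < threshold

if __name__ == "__main__":
    score = qos_score(read_success=0.95, signal=0.4, delivery=0.8)
    print(score, needs_reconfiguration(score))  # 0.74 False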
Situations that plague users, when the system does not respond appropriately to the
users' expectations; these occur when knowledge processes are badly modelled or coded,
or result from latency, showing unexpected and/or non-aggregated results,
Normal situations which can convey uncertainty or imprecision, and which can lead to
inappropriate system reactions, and
The first group of situations has been handled through repeated system testing, whereas the
last two groups of situations are complementary, in the sense that situations which convey
uncertainty or imprecision would require the incorporation of additional processes to reduce
that uncertainty and/or imprecision. This can be illustrated by the following examples:
o The sensor has provided a reading but is not connected to the WSN (not configured or
low signal), and
Sensor data can be interpreted in the following different ways, mainly in the case of
detectors emitting sounds or lights to indicate that critical values have been reached.
This situation requires additional checks to reduce false alarms, which might otherwise
result in inappropriate actions. An example of false alarm conflict resolution is presented
in Section 4.5.2 (Complex Generic Knowledge Tasks).
These systems are connected to fire detection and alarm systems, contrary to HIDSS, which
integrates evacuation planning, fire detection and evacuation functions. They rely on
autonomous fire detectors that carry risks of malfunctioning, including both false alarms
and failure to trigger when a fire occurs. However, they automatically trigger a voice alarm
using class D amplifiers and focus on the quality of messages, ensuring that evacuation
messages are not misinterpreted or disregarded and contain clear instructions that can be
immediately understood. In addition to using optical orientation aids, in the form of escape
route lights and display panels, to enhance the escape route guidance, they incorporate
pre-programmed evacuation processes that are activated according to the evacuation plan
selected and agreed by the rescue team. The combined use of optical orientation aids and
voice messages aims at reducing the impact of visual and hearing impairment, excessive noise
and lack of visibility.
These systems support automatic phase-driven evacuation and are based on programmable
control logic and digital signal processing. They can simultaneously deliver a variety of
personalised messages corresponding to different hazards and scenarios that require
evacuating people from different zones at a time. They use multiple audio communication
channels with a loop-redundant network for uninterrupted information exchange, to deliver
both pre-recorded and live announcement messages by means of speakers and paging. These
systems are operated manually by authorised people, who support re-routing the evacuation
when the fire spreads and escape routes are no longer passable. They are based on LCD
displays with touch screens and on automated procedures synchronised with the fire detection
and alarm systems. They can be connected to a computer to import a ready-made configuration,
and can connect to similar systems and also to fire-fighters' phones, in a network
configuration managed by networked control units.
Identifying room exits, and associating one or more evacuation plans per room in the
monitored premises as illustrated in Figure 6.9:
Figure 6.9: Exits and evacuation plans specifications.
Specifying the nature and type of wall portion to enable the identification of exits and
potential hazards associated with the building elements,
Specifying the building navigation routes and flows in accordance with the room
attendance capacity, as shown in Figure 6.10.
Figure 6.10: Heterogeneous devices specifications, and navigation and flow model.
o Deploy the room sprinklers,
Identifying and individually tracking people equipped with wristbands that integrate
RFID and WSN technologies and are linked to a database containing impairment data
(hearing, sight and mobility), in order to:
This comparative evaluation is based on showing the differences in the support capabilities
required at the different stages of emergency systems: planning, surveillance and hazard
mitigation. These capabilities are inherent to the systems' performance and timing, mainly
the time saved when using evacuation data models that include predictive data.
6.7 Summary
The evaluation of the system prototype has been limited to the testing of the implemented
functional components supporting the data capture and processing that compose the first step
of the WSN data fusion process. The testing has revealed additional requirements that extend
the design and development of HIDSS, deriving new specifications for the support of other
knowledge processes of the sensing and tracking domain and for the elaboration of knowledge
rules for problem-solving modelling, required for the generic iterative development and
implementation of domain context applications.
This evaluation has not allowed the testing of all the problems inherent to the
configuration and deployment of WSNs in the real world. However, the case study has produced
conclusive results that demonstrate the effective implementation of the knowledge entity
model and confirm the practicality of the generic conceptual design framework for hybrid
intelligent decision support systems. This framework illustrates the integration of WSN and
RFID, which extends the combined use of smart devices and contributes to enhancing the
integration of data, knowledge and model functions in a multi-agent web-based service
architecture.
Chapter 7: Conclusions and Future Work
7.1 Conclusions
The work presented in this research started with the idea of how best to design integrated
web-services composed of intelligent agents interacting with each other to control dynamic
data-context aware environments. The study of the research domain has exposed the
importance of the specific characteristics of data context aware environments, which include
real time heterogeneous data capture and dynamic collaborative decision making that enable
intelligent analysis and effective real time response, mainly in emergency response situations.
The real time heterogeneous data capture results from the deployment of smart homogeneous
and heterogeneous devices designed and built around an integrated WSN-RFID technology,
and wirelessly connected to an ad hoc WSN forming a network mesh. This network, which
requires configuration prior to deployment, became an essential element of the generic design
conceptual framework supported by a hybrid intelligent decision support system.
The need for such a support system is justified by the complexity of the analytical modelling
of dynamic and collaborative knowledge and decision making processes that are supported by
the integration of data, knowledge and models. These three elements are essential for the
study of knowledge domains and their structure into knowledge processes aimed at data and
knowledge acquisition, in the context of data and information fusion.
The different functional components supporting the sensor nodes and WSN data fusion, and
also information and knowledge fusion, have been articulated in a distributed web-based
multi-agent system that aims at supporting several context applications. This system
architecture has taken into account the necessary decoupling of the data capture and
processing from the process of knowledge discovery and refinement, imposed by the
hierarchy of the functional links of the knowledge domain study and the methodological
approach used for its decomposition and analysis.
This research has covered several hybrid domains: integrated-technology sensor node devices,
homogeneous versus heterogeneous networks, processes, automated versus manual decisions, and
different classes and types of agents and services. The integration of these different
domains in the proposed conceptual framework demonstrates that integrated RFID-WSNs, coupled
with the use of homogeneous and heterogeneous smart devices integrating readers, can
increase the robustness, accuracy and reliability of data in the intelligent monitoring of
smart buildings.
The proposed hybrid intelligent decision support system is an integrated prototype system
aimed at supporting the different steps of the knowledge domain study to derive collaborative
domain knowledge for problem solving and decision making. This prototype system has been
implemented on a case study covering an important aspect of the indoor sensing and tracking
knowledge domain which has several application interests: hospitals, museums, nurseries,
schools, hotels, prisons, and others. These applications share a common knowledge domain
that is concerned with the indoor sensing of the environment and the localisation and tracking
of people and goods within closed premises.
The system prototype has been evaluated firstly from the perspective of validating and
enriching the system requirements and the specifications for the definition of the information
and knowledge fusion support functions or services, and then assessing the results of the
implemented functional components.
Fusion processes need to be fully automated, benefiting from intelligent fusion systems
supported by dynamic distributed multi-agent systems processing a variety of soft and
hard data collected from very diverse sources.
The extraction and fusing of information and knowledge need the investigation of real-
time support for decision level fusion performance improvement in several domains, such
as multi-source soft and hard data fusion, computational complexity and multi-level
inference or evidence fusion.