IOT Unit-II Notes
An M2M area network comprises machines (or M2M nodes) which have embedded
network modules for sensing, actuation and communication. Various communication
protocols can be used for M2M local area networks, such as ZigBee, Bluetooth, M-Bus,
Wireless M-Bus, etc. These protocols provide connectivity between M2M nodes within
an M2M area network.
The communication network provides connectivity to remote M2M area networks. The
communication network can use either a wired or a wireless (IP-based) network. While
the M2M area networks use either proprietary or non-IP-based communication protocols,
the communication network uses IP-based networking. Since non-IP-based protocols are
used within an M2M area network, the M2M nodes within one network cannot
communicate with nodes in an external network.
To enable communication between remote M2M area networks, M2M gateways are
used.
Fig. shows a block diagram of an M2M gateway. The communication between M2M nodes and
the M2M gateway is based on the communication protocols which are native to the M2M area
network. The M2M gateway performs protocol translations to enable IP connectivity for M2M area
networks. The M2M gateway acts as a proxy, performing translations from/to native protocols to/from
Internet Protocol (IP). With an M2M gateway, each node in an M2M area network appears as a
virtualized node to external M2M area networks.
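The translation an M2M gateway performs can be sketched in code. The sketch below is a minimal illustration: the 8-byte native frame layout, the field names, and the "virtual-node" naming are all hypothetical; a real gateway would translate an actual non-IP protocol such as M-Bus or ZigBee frames.

```python
import json
import struct

# Hypothetical 8-byte native (non-IP) frame used inside the M2M area network:
# 2-byte node id, 2-byte sensor id, 4-byte float reading (big-endian).
def native_to_ip(frame: bytes) -> str:
    """Translate a native frame into a JSON message for the IP-based network."""
    node_id, sensor_id, reading = struct.unpack(">HHf", frame)
    # Each M2M node is exposed as a virtualized node with an addressable name.
    return json.dumps({
        "node": f"virtual-node-{node_id}",
        "sensor": sensor_id,
        "value": round(reading, 2),
    })

# A node inside the area network emits a raw frame; the gateway translates it.
frame = struct.pack(">HHf", 7, 1, 23.5)
msg = native_to_ip(frame)
```

The reverse direction (IP to native) would mirror this: parse the JSON, look up the real node behind the virtualized name, and emit a native frame.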
5) Applications
□ M2M data is collected in point solutions and can be accessed by on-premises
applications such as diagnosis applications, service management applications, and
on-premises enterprise applications.
□ IoT data is collected in the cloud and can be accessed by cloud applications such
as analytics applications, enterprise applications, remote diagnosis and
management applications, etc.
1) Centralized Network Controller
With decoupled control and data planes and centralized network controller, the
network administrators can rapidly configure the network.
2) Programmable Open APIs
SDN architecture supports programmable open APIs for interface between the
SDN application and control layers (Northbound interface).
2) NFV Infrastructure (NFVI):
NFVI includes compute, network and storage resources that are virtualized.
Step 1: Purpose & Requirements Specification • The first step in IoT system design
methodology is to define the purpose and requirements of the system. In this step, the
system purpose, behavior and requirements (such as data collection requirements, data
analysis requirements, system management requirements, data privacy and security
requirements, user interface requirements, ...) are captured.
Step 2: Process Specification • The second step in the IoT design methodology is to define
the process specification. In this step, the use cases of the IoT system are formally described
based on and derived from the purpose and requirement specifications.
Step 3: Domain Model Specification • The third step in the IoT design methodology is to
define the Domain Model. The domain model describes the main concepts, entities and
objects in the domain of IoT system to be designed. Domain model defines the attributes of
the objects and relationships between objects. Domain model provides an abstract
representation of the concepts, objects and entities in the IoT domain, independent of any
specific technology or platform. With the domain model, the IoT system designers can get
an understanding of the IoT domain for which the system is to be designed.
Step 4: Information Model Specification • The fourth step in the IoT design methodology is
to define the Information Model. Information Model defines the structure of all the
information in the IoT system, for example, attributes of Virtual Entities, relations, etc.
Information model does not describe the specifics of how the information is represented or
stored. To define the information model, we first list the Virtual Entities defined in the
Domain Model. Information model adds more details to the Virtual Entities by defining their
attributes and relations.
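An information model of this kind can be sketched with plain classes. The entity names, attribute names, and the "locatedIn" relation below are hypothetical examples, not part of any specific standard.

```python
from dataclasses import dataclass, field

@dataclass
class Attribute:
    """An attribute of a Virtual Entity, e.g. an on/off state."""
    name: str
    attr_type: str
    value: object = None

@dataclass
class VirtualEntity:
    """A Virtual Entity from the Domain Model, enriched with attributes and relations."""
    entity_id: str
    entity_type: str
    attributes: list = field(default_factory=list)
    relations: dict = field(default_factory=dict)  # relation name -> entity id

# Hypothetical home-automation entities: a room and a lamp located in it.
room = VirtualEntity("room-1", "Room")
lamp = VirtualEntity("lamp-1", "Appliance",
                     attributes=[Attribute("state", "on/off", "off")])
lamp.relations["locatedIn"] = room.entity_id
```

Note that, as the text says, this model captures structure (attributes and relations) without saying anything about how the information is stored or represented on the wire.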
Step 5: Service Specifications • The fifth step in the IoT design methodology is to define the
service specifications. Service specifications define the services in the IoT system, service
types, service inputs/output, service endpoints, service schedules, service preconditions and
service effects.
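A service specification can likewise be written down as structured data. The service name, endpoint, and field values below are hypothetical; the field names mirror the items listed above (type, inputs/output, endpoint, schedule, precondition, effect).

```python
# Hypothetical specification of a "set mode" service.
set_mode_service = {
    "name": "mode-service",
    "type": "REST",
    "inputs": {"mode": ["auto", "manual"]},
    "output": "status",
    "endpoint": "/mode",                     # hypothetical endpoint
    "schedule": None,                        # invoked on demand, not on a schedule
    "precondition": "controller is reachable",
    "effect": "system mode attribute is updated",
}

def invoke(service, mode):
    """Sketch of invoking the service: validate the input against the spec."""
    if mode not in service["inputs"]["mode"]:
        raise ValueError(f"invalid mode: {mode}")
    return {"status": "ok", "mode": mode}
```

In a real system the endpoint would be served over HTTP; here the invocation is simulated in-process to show how the specification constrains the service inputs.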
Step 6: IoT Level Specification • The sixth step in the IoT design methodology is to define
the IoT level for the system.
Step 7: Functional View Specification • The seventh step in the IoT design methodology is to
define the Functional View. The Functional View (FV) defines the functions of the IoT
systems grouped into various Functional Groups (FGs). Each Functional Group either
provides functionalities for interacting with instances of concepts defined in the Domain
Model or provides information related to these concepts.
Step 8: Operational View Specification • The eighth step in the IoT design methodology is to
define the Operational View Specifications. In this step, various options pertaining to the IoT
system deployment and operation are defined, such as, service hosting options, storage
options, device options, application hosting options, etc
Step 9: Device & Component Integration • The ninth step in the IoT design methodology is
the integration of the devices and components.
Step 10: Application Development • The final step in the IoT design methodology is to
develop the IoT application.
Embedded Computing Logic
It is essential to know about the embedded devices while learning the IoT or building the
projects on IoT. The embedded devices are the objects that build the unique computing
system. These systems may or may not connect to the Internet.
An embedded device system generally runs as a single application. However, these devices
can connect through an internet connection and are able to communicate with other
network devices.
First developed in the 1960s for aerospace and the military, embedded computing systems
continue to support new applications through numerous feature enhancements and
cost-to-performance improvements of microcontrollers and programmable logic devices. Today,
embedded computing systems control everyday devices which we don’t generally think of
as “computers”: digital cameras, automobiles, smart watches, home appliances, and even
smart garments. These embedded computing systems are commonly found in consumer,
industrial, automotive, medical, commercial, and military applications.
Part of the designer's responsibility involves being aware of trends in their particular
industry and taking advantage of relevant components and techniques. Let's look for
examples in one of the top industries for microcontroller applications, the Internet of Things.
Embedded System Hardware
The embedded system can be of type microcontroller or type microprocessor. Both of these
types contain an integrated circuit (IC). The essential component of the embedded system is
a RISC family microcontroller like Motorola 68HC11, PIC 16F84, Atmel 8051 and many more.
The most important factor that differentiates these microcontrollers from
microprocessors such as the 8085 is their internal readable and writable memory. The essential
embedded device components and system architecture are specified below.
Where an embedded system runs an operating system, the choice of device is based on
the language platform, mainly on whether real-time operation is required.
Manufacturers build embedded software in electronics, e.g., cars, telephones, modems,
appliances, etc. The embedded system software can be as simple as lighting controls
running using an 8-bit microcontroller. It can also be complicated software for missiles,
process control systems, airplanes etc.
IoT devices are meant to be inexpensive, so the microcontroller needs to be chosen
such that its capabilities are not underutilized by the application. The microcontroller
specifications that determine the best part for your application are:
Bit depth: The register and data path width impacts the speed and accuracy with which
microcontrollers can perform non-trivial computations.
Memory: The amount of RAM and Flash in a microcontroller determines the code size and
complexity the component can support at full speed. Large memories have larger die area
and component cost.
GPIO: These are the microcontroller pins used to connect to sensors and actuators in the
system. These often share their functionality with other microcontroller peripherals, such as
serial communication, A/D, and D/A converters.
Power consumption: Power consumption is critically important for battery-operated devices
and it typically increases with microcontroller speed and memory size.
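The selection criteria above can be sketched as a simple filter over a parts catalogue. Everything in the sketch is illustrative: the part names and their figures are made up, not real microcontroller data.

```python
# Hypothetical microcontroller catalogue; all figures are illustrative only.
# (part, bits, ram_kb, flash_kb, gpio, active_mA, cost_usd)
CANDIDATES = [
    ("mcu-8bit",  8,   2,   32, 20,  5, 0.8),
    ("mcu-32bit", 32, 64,  512, 40, 15, 2.5),
    ("mcu-low",   32, 16,  128, 25,  3, 1.6),
]

def cheapest_fit(candidates, min_ram_kb, min_flash_kb, min_gpio, max_mA):
    """Return the cheapest part meeting the RAM, flash, GPIO and power limits."""
    fits = [c for c in candidates
            if c[2] >= min_ram_kb and c[3] >= min_flash_kb
            and c[4] >= min_gpio and c[5] <= max_mA]
    return min(fits, key=lambda c: c[6])[0] if fits else None
```

The point of the sketch: power consumption and memory act as hard constraints, and cost then picks among the parts that remain, matching the "not underutilized, not overpaid" guidance above.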
System on Chips
A System on Chip for IoT designed by Redpine Signals is discussed below. This IoT SoC supports
WLAN, Bluetooth and ZigBee systems on a single chip. It also supports the 2.4 and 5 GHz radio
frequency bands.
As we know, IoT is the technology that provides communication between things, and
between things and people, using the internet and IP-enabled protocols. As we have seen in
the IoT tutorial, any IoT-compliant system has two major parts, viz. the front end and the
back end. The front end provides connectivity with the physical world and consists of sensors,
while the back end consists of processing and network connectivity interfaces.
A typical IoT system on chip supports more than one RAT (Radio Access Technology). It will
have the following modules.
• Transmit and receive switch.
• RF part, mainly consisting of transmitter, receiver, oscillator and amplifiers.
• Memories, i.e. program memory and data memory, to store the code and data.
• Physical layer (baseband processing), either on an FPGA or on a processor, based on
complexity and latency requirements.
• MAC layer and upper protocol stacks (TCP/IP etc.) running on the processor.
• ADC and DAC to provide an interface between the digital baseband and analog RF portions.
• Various interfaces such as SDIO, USB, SPI etc. to provide an interface with the host.
• Other peripherals such as UART, I2C, GPIO, WDT etc. to use the IoT SoC for various
connections.
As an IoT system on chip supports multiple wireless protocols, and RF hardware to support
multiple frequency bands, the following factors need to be carefully analyzed and
optimized.
• Power consumption
• Data throughput
• Device size
• Performance in terms of latency and other factors
The figure depicts one such IoT System on Chip, model no. RS9113, which has been designed and
developed by Redpine Signals. It supports WLAN (802.11n), Bluetooth version 4.0
and ZigBee (802.15.4-2006) on the same chip. Hence the IoT device can be connected to
networks based on any of these wireless technologies.
This IoT SoC (system on chip in IoT) can be used for numerous applications as mentioned
below:
• Mobile
• M2M communication
• Thermostats
• Smart meters
• Home automation
• Health care devices and equipment
Building Blocks Of IoT
Four things form the basic building blocks of an IoT system: sensors, processors, gateways
and applications. Each of these nodes has to have its own characteristics in order to form a
useful IoT system.
Figure: Simplified block diagram of the basic building blocks of the IoT
Sensors:
These form the front end of the IoT devices. These are the so-called "Things" of the
system. Their main purpose is to collect data from their surroundings (sensors) or give
out data to their surroundings (actuators).
These have to be uniquely identifiable devices with a unique IP address so that they
can be easily identifiable over a large network.
These have to be active in nature which means that they should be able to collect
real-time data. These can either work on their own (autonomous in nature) or can be
made to work by the user depending on their needs (user-controlled).
Examples of sensors are gas sensor, water quality sensor, moisture sensor, etc.
Processors:
Processors are the brain of the IoT system. Their main function is to process the data
captured by the sensors so as to extract valuable information from the enormous
amount of raw data collected. In a word, we can say that they give
intelligence to the data.
Processors mostly work on a real-time basis and can be easily controlled by
applications. They are also responsible for securing the data, that is, performing
encryption and decryption of data.
Embedded hardware devices, microcontrollers, etc. are the ones that process the data
because they have processors attached to them.
Gateways:
Gateways are responsible for routing the processed data and sending it to the proper
locations for its (the data's) proper utilization.
In other words, we can say that gateway helps in to and fro communication of the
data. It provides network connectivity to the data. Network connectivity is essential
for any IoT system to communicate.
LAN, WAN, PAN, etc are examples of network gateways.
Applications:
Applications form another end of an IoT system. Applications are essential for proper
utilization of all the data collected.
These are cloud-based applications which are responsible for rendering effective
meaning to the data collected. Applications are controlled by users and are the
delivery point of particular services.
Examples of applications are home automation apps, security systems, industrial
control hub, etc.
In a nutshell, from the figure we can see that the information gathered by the
sensing node (end node) is processed first, then via connectivity it reaches the embedded
processing nodes, which can be any embedded hardware devices, and is processed there as
well. It then passes through the connectivity nodes again and reaches the remote cloud-
based processing, which can be any software, and is sent to the application node for the proper
applied usage of the data collected and also for data analysis via big data.
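The flow just described can be simulated end-to-end in a few lines. This is a toy in-process sketch; the sensor readings, the filtering threshold, and the device name are all made up for illustration, and no real network or hardware is involved.

```python
# Toy end-to-end flow: sensing node -> edge processing -> gateway -> cloud app.
def sense():
    # Raw readings from the sensing (end) node; 91.0 is a spurious spike.
    return [22.1, 22.4, 91.0, 22.3]

def edge_process(readings):
    # Embedded processing node: drop obviously bad samples before transmission.
    return [r for r in readings if r < 60]

def gateway(payload):
    # Gateway: wrap the processed data for transport to the cloud.
    return {"device": "node-1", "data": payload}

def cloud_app(message):
    # Cloud-based processing: extract a simple aggregate for the application.
    data = message["data"]
    return round(sum(data) / len(data), 2)

avg = cloud_app(gateway(edge_process(sense())))
```

Each function stands in for one of the building blocks above, which makes the division of labour (sense, process, route, apply) explicit.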
First, it acquires information with respect to basic resources (names, addresses and so on)
and related attributes of objects by means of automatic identification and perception
technologies such as RFID, wireless sensors and satellite positioning. In other words, the
sensors, RFID tags, and all other uniquely identifiable objects or "things" acquire real-time
information (data) with the help of a central hub like a smartphone.
In the physical layer, all the data collected by the access system (uniquely identifiable
"things") goes to the internet devices (like smartphones). Then via
transmission lines (like fiber-optic cable) it goes to the management layer, where the raw
data is managed separately (stream analytics and data analytics). Then all the
managed information is released to the application layer for proper utilization of the data
collected.
IoT Architecture Layers
At the very bottom of IoT architecture, we start with the Sensors and Connectivity network
which collects information. Then we have the Gateway and Network Layer. Above which we
have the Management Service layer and then at the end, we have the application layer
where the data collected are processed according to the needs of various applications.
Sensor, Connectivity and Network Layer
This layer consists of RFID tags and sensors, which are an essential part of an IoT system
and are responsible for collecting raw data. These form the essential "things" of an
IoT system.
Sensors, RFID tags are wireless devices and form the Wireless Sensor Networks
(WSN).
Sensors are active in nature, which means that real-time information is
collected and processed.
This layer also has the network connectivity (like WAN, PAN, etc.) which is
responsible for communicating the raw data to the next layer which is the Gateway
and Network Layer.
The devices which comprise a WSN have finite storage capacity, restricted
communication bandwidth and low processing speed.
We have different sensors for different applications – temperature sensor for
collecting temperature data, water quality for examining water quality, moisture
sensor for measuring moisture content of the atmosphere or soil, etc.
As per the figure below, at the bottom of this layer, we have the tags which are the RFID
tags or barcode reader, above which we have the sensors/actuators and then the
communication networks.
Figure : Sensor, Connectivity and Network Layer
Gateway and Network Layer
Gateways are responsible for routing the data coming from the Sensor, Connectivity
and Network Layer and passing it on to the next layer, which is the Management Service
Layer.
This layer requires having a large storage capacity for storing the enormous amount
of data collected by the sensors, RFID tags, etc. Also, this layer needs to have a
consistently trusted performance in terms of public, private and hybrid networks.
Different IoT devices work on different kinds of network protocols. All these
protocols are required to be assimilated into a single layer. This layer is responsible
for integrating the various network protocols.
From the figure below, at the bottom, we have the gateway, which is comprised of the
embedded OS, signal processors, modulators, micro-controllers, etc. Above the gateway
we have the gateway networks, which are LAN (Local Area Network), WAN (Wide Area
Network), etc.
Management Service Layer
This layer is used for managing IoT services. The Management Service Layer is
responsible for security analysis of IoT devices, analysis of information (stream
analytics, data analytics) and device management.
Data management is required to extract the necessary information from the
enormous amount of raw data collected by the sensor devices to yield a valuable
result of all the data collected. This action is performed in this layer.
Also, a certain situation requires an immediate response to the situation. This layer
helps in doing that by abstracting data, extracting information and managing the
data flow.
This layer is also responsible for data mining, text mining, service analytics, etc.
From the figure below, we can see that the Management Service Layer has an Operational
Support Service (OSS), which includes device modeling, device configuration and management,
and many more. Also, we have the Billing Support System (BSS), which supports billing and
reporting.
Also, from the figure, we can see that there are IoT/M2M Application Services which
include the Analytics Platform; Data, which is the most important part; Security, which
includes access controls, encryption, identity access management, etc.; and then we have
Business Rule Management (BRM) and Business Process Management (BPM).
Application Layer
The application layer forms the topmost layer of the IoT architecture and is responsible for
effective utilization of the data collected.
Various IoT applications include Home Automation, E-health, E-Government, etc.
From the figure below, we can see that there are two types of applications which are
Horizontal Market which includes Fleet Management, Supply Chain, etc. and on the
Sector-wise application of IoT we have energy, healthcare, transportation, etc.
IoT Platform
An IoT platform can wear different hats depending on how you look at it. It is
commonly referred to as middleware when we talk about how it connects remote devices
to user applications (or other devices) and manages all the interactions between the
hardware and the application layers. It is also known as a cloud enablement platform or IoT
enablement platform to pinpoint its major business value, that is, empowering standard
devices with cloud-based applications and services. Finally, under the name of IoT
application enablement platform, the focus shifts to its being a key tool for IoT developers.
IoT platforms originated in the form of IoT middleware, whose purpose was to function as a
mediator between the hardware and application layers. Its primary tasks included data
collection from the devices over different protocols and network topologies, remote device
configuration and control, device management, and over-the-air firmware updates.
Modern IoT platforms go further and introduce a variety of valuable features into the
hardware and application layers as well. They provide components for frontend and
analytics, on-device data processing, and cloud-based deployment. Some of them can
handle end-to-end IoT solution implementation from the ground up.
IoT platform technology stack
In the four typical layers of the IoT stack, which are things, connectivity, core IoT features,
and applications & analytics, a top-of-the-range IoT platform should provide you with the
majority of IoT functionality needed for developing your connected devices and smart
things.
Your devices connect to the platform, which sits in the cloud or in your on-premises data
center, either directly or by using an IoT gateway. A gateway comes in useful whenever your
endpoints aren't capable of direct cloud communication or, for example, you need some
computing power at the edge. You can also use an IoT gateway to convert protocols, for
example, when your endpoints are on a LoRaWAN network but you need them to
communicate with the cloud over MQTT.
An IoT platform itself can be decomposed into several layers. At the bottom there is the
infrastructure level, which is something that enables the functioning of the platform. You
can find here components for container management, internal platform messaging,
orchestration of IoT solution clusters, and others.
The communication layer enables messaging for the devices; in other words, this is where
devices connect to the cloud to perform different operations.
The following layer represents core IoT features provided by the platform. Among the
essential ones are data collection, device management, configuration management,
messaging, and OTA software updates.
Sitting on top of the core IoT features, there is another layer, which is related less to data
exchange between devices and more to processing of this data in the platform. There is
reporting, which allows you to generate custom reports. There is visualization for data
representation in user applications. Then, there are a rule engine, analytics, and alerting for
notifying you about any anomalies detected in your IoT solution.
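A rule engine of the kind mentioned above can be sketched as a list of condition/alert pairs evaluated against incoming telemetry. The rules, thresholds, and field names below are illustrative assumptions, not taken from any particular platform.

```python
# Minimal rule-engine sketch: each rule pairs a condition with an alert message.
RULES = [
    (lambda t: t["temperature"] > 80, "over-temperature"),
    (lambda t: t["battery"] < 10, "battery low"),
]

def evaluate(telemetry, rules=RULES):
    """Return the list of alert messages triggered by one telemetry sample."""
    return [msg for cond, msg in rules if cond(telemetry)]

# One incoming sample; the engine flags the anomaly it matches.
alerts = evaluate({"temperature": 95, "battery": 42})
```

In a real platform the matching rules would feed the alerting component (email, SMS, dashboard); here they are simply returned so the behaviour is easy to inspect.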
Importantly, the best IoT platforms allow you to add your own industry-specific components
and third-party applications. Without such flexibility, adapting an IoT platform for a
particular business scenario could incur significant extra cost and delay the solution delivery
indefinitely.
Advanced IoT platforms
There are some other important criteria that differentiate IoT platforms from each
other, such as scalability, customizability, ease of use, code control, integration with
third-party software, deployment options, and the data security level.
Scalable (cloud native) – advanced IoT platforms ensure elastic scalability
across any number of endpoints that the client may require. This capability is
taken for granted for public cloud deployments but it should be specifically
put to the test in case of an on-premises deployment, including the platform’s
load balancing capabilities for maximized performance of the server cluster.
Customizable – a crucial factor for the speed of delivery. It closely relates to
the flexibility of integration APIs, loose coupling of the platform's components,
and source code transparency. For small-scale, undemanding IoT solutions
good APIs may be enough to fly, while feature-rich, rapidly evolving IoT
ecosystems usually require developers to have a greater degree of control
over the entire system, its source code, integration interfaces, deployment
options, data schemas, connectivity and security mechanisms, etc.
Secure – data security involves encryption, comprehensive identity
management, and flexible deployment. End-to-end data flow encryption,
including data at rest, device authentication, user access rights management,
and private cloud infrastructure for sensitive data – these are the basics of how
to avoid potentially compromising breaches in your IoT solution.
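One of the security basics mentioned above, device authentication, can be sketched with a shared-secret HMAC scheme. This is an illustrative sketch only: the secret, device ids, and token format are hypothetical, and real platforms typically use certificates or per-device keys managed by a provisioning service.

```python
import hashlib
import hmac

# Illustrative shared secret, provisioned to the device out of band.
SHARED_SECRET = b"per-device-secret"

def sign(device_id: str, secret: bytes = SHARED_SECRET) -> str:
    """Device side: produce an authentication tag over the device id."""
    return hmac.new(secret, device_id.encode(), hashlib.sha256).hexdigest()

def verify(device_id: str, tag: str, secret: bytes = SHARED_SECRET) -> bool:
    """Platform side: check the tag before accepting the device's data."""
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(sign(device_id, secret), tag)

token = sign("device-42")
```

A tampered device id (or a token minted with the wrong secret) fails verification, which is the property that keeps unauthenticated endpoints out of the solution.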
Cutting across these aspects, there are two different paradigms of IoT solution cluster
deployment offered by IoT platform providers: a public cloud IoT PaaS and a self-hosted
private IoT cloud.
IoT cloud enablement
An IoT cloud is the pinnacle of IoT platform evolution. Sometimes these two terms are
used interchangeably, in which case the system at hand is typically an IoT
platform-as-a-service (PaaS). This type of solution allows you to rent cloud infrastructure and an IoT
platform all from a single technology provider. Also, there might be ready-to-use IoT
solutions (IoT cloud services) offered by the provider, built and hosted on its infrastructure.
However, one important capability of a modern IoT platform is private IoT cloud
enablement. As opposed to public PaaS solutions located in a provider's cloud, a private IoT
cloud can be hosted on any cloud infrastructure, including a private data center. This type of
deployment offers much greater control over the new features development,
customization, and third-party integrations. It is also advocated for stringent data security
and performance requirements.
Board Types
Board Name   Operating Volt  Clock Speed  Digital i/o  Analog Inputs  PWM  UART  Programming Interface
Arduino Fio  3.3V            8 MHz        14           8              6    1     FTDI-Compatible Header
In this chapter, we will learn about the different components on the Arduino board. We
will study the Arduino UNO board because it is the most popular board in the Arduino
board family. In addition, it is the best board to get started with electronics and coding.
Some boards look a bit different from the one given below, but most Arduinos have the
majority of these components in common.
Power USB
Arduino board can be powered by using the USB cable from your computer. All
you need to do is connect the USB cable to the USB connection (1).
Voltage Regulator
The function of the voltage regulator is to control the voltage given to the
Arduino board and stabilize the DC voltages used by the processor and other
elements.
Crystal Oscillator
The crystal oscillator helps Arduino in dealing with time issues. How does
Arduino calculate time? The answer is, by using the crystal oscillator. The
number printed on top of the Arduino crystal is 16.000H9H. It tells us that the
frequency is 16,000,000 Hertz or 16 MHz.
Arduino Reset
You can reset your Arduino board, i.e., start your program from the beginning.
You can reset the UNO board in two ways. First, by using the reset button (17)
on the board. Second, you can connect an external reset button to the Arduino
pin labelled RESET (5).
Analog pins
The Arduino UNO board has six analog input pins A0 through A5. These pins
can read the signal from an analog sensor like the humidity sensor or
temperature sensor and convert it into a digital value that can be read by the
microprocessor.
Main microcontroller
Each Arduino board has its own microcontroller (11). You can assume it as the
brain of your board. The main IC (integrated circuit) on the Arduino is slightly
different from board to board. The microcontrollers are usually of the ATMEL
Company. You must know what IC your board has before loading up a new
program from the Arduino IDE. This information is available on the top of the
IC. For more details about the IC construction and functions, you can refer to
the data sheet.
ICSP pin
ICSP (12) is a tiny AVR programming header for the Arduino,
consisting of MOSI, MISO, SCK, RESET, VCC, and GND. It is often referred to as
an SPI (Serial Peripheral Interface) header, which could be considered an
"expansion" of the output. Actually, you are slaving the output device to the
master of the SPI bus.
TX and RX LEDs
On your board, you will find two labels: TX (transmit) and RX (receive). They
appear in two places on the Arduino UNO board. First, at the digital pins 0 and
1, to indicate the pins responsible for serial communication. Second, the TX
and RX LEDs (13). The TX LED flashes at varying speed while sending the
serial data. The speed of flashing depends on the baud rate used by the board.
RX flashes during the receiving process.
Digital I/O
The Arduino UNO board has 14 digital I/O pins (15), of which 6 provide PWM
(Pulse Width Modulation) output. These pins can be configured to work as
input digital pins to read logic values (0 or 1) or as digital output pins to drive
different modules like LEDs, relays, etc. The pins labeled "~" can be used to
generate PWM.
AREF
AREF stands for Analog Reference. It is sometimes used to set an external
reference voltage (between 0 and 5 volts) as the upper limit for the analog
input pins.
After learning about the main parts of the Arduino UNO board, we are ready to learn how
to set up the Arduino IDE. Once we learn this, we will be ready to upload our program on
the Arduino board.
In this section, we will learn in easy steps, how to set up the Arduino IDE on our computer
and prepare the board to receive the program via USB cable.
Step 1 − First you must have your Arduino board (you can choose your favorite board) and
a USB cable. In case you use Arduino UNO, Arduino Duemilanove, Arduino Mega
2560, or Diecimila, you will need a standard USB cable (A plug to B plug), the kind you
would connect to a USB printer, as shown in the following image.
In case you use Arduino Nano, you will need an A to Mini-B cable instead as shown in the
following image.
Here, we are selecting just one of the examples with the name Blink. It turns the LED on
and off with some time delay. You can select any other example from the list.
Step 6 − Select your Arduino board.
To avoid any error while uploading your program to the board, you must select the correct
Arduino board name, which matches with the board connected to your computer.
Go to Tools → Board and select your board.
Here, we have selected Arduino Uno board according to our tutorial, but you must select
the name matching the board that you are using.
Step 7 − Select your serial port.
Select the serial device of the Arduino board. Go to Tools → Serial Port menu. This is likely
to be COM3 or higher (COM1 and COM2 are usually reserved for hardware serial ports). To
find out, you can disconnect your Arduino board and re-open the menu, the entry that
disappears should be of the Arduino board. Reconnect the board and select that serial
port.
Step 8 − Upload the program to your board.
Before explaining how we can upload our program to the board, we must demonstrate the
function of each symbol appearing in the Arduino IDE toolbar.
A − Used to check if there is any compilation error.
B − Used to upload a program to the Arduino board.
C − Shortcut used to create a new sketch.
D − Used to directly open one of the example sketches.
E − Used to save your sketch.
F − Serial monitor used to receive serial data from the board and send the serial data to the
board.
Now, simply click the "Upload" button in the environment. Wait a few seconds; you will see
the RX and TX LEDs on the board flashing. If the upload is successful, the message "Done
uploading" will appear in the status bar.
The IoT concept implies the creation of a network of various devices interacting with each
other and with their environment. Interoperability and connectivity wouldn't be possible
without hardware platforms that help developers solve issues such as building autonomous
interactive objects or completing common infrastructure-related tasks.
Let’s go through the most popular IoT platforms and see how they work and benefit IoT
software developers.
Arduino
The Arduino platform was created back in 2005 by the Arduino company. It allows for
open-source prototyping and flexible software development and back-end deployment,
while providing significant ease of use to developers, even those with very little experience
building IoT solutions.
Arduino is sensitive to literally every environment, receiving source data from different
external sensors, and is capable of interacting with other control elements over various
devices, engines and drives. Arduino has a built-in microcontroller that runs the Arduino
software.
Projects based on this platform can be both standalone and collaborative, i.e. realized with
the use of external tools and plugins. The integrated development environment (IDE) is
composed of open source code and works equally well with Mac, Linux and Windows
OS. Based on the Processing programming language, the Arduino platform seems to be
created for new users and for experiments. The Processing language is dedicated to
visualizing and building interactive apps using animation and the Java Virtual Machine (JVM)
platform.
This programming language was developed for the purpose of teaching basic computer
programming in a visual context. It is a completely free project available to anyone
interested. Normally, Arduino applications are programmed in C/C++ and compiled
with avr-gcc (WinAVR on Windows).
Arduino offers analogue-to-digital input with the possibility of connecting light, temperature
or sound sensor modules. Sensor modules using interfaces such as SPI or I2C may also be
connected, covering up to 99% of these apps’ market.
Raspberry Pi
Raspberry Pi (/paɪ/) is a series of small single-board computers developed in the United
Kingdom by the Raspberry Pi Foundation in association with Broadcom. Early on, the
Raspberry Pi project leaned towards the promotion of teaching basic computer science in
schools and in developing countries. Later, the original model became far more popular
than anticipated, selling outside its target market for uses such as robotics. It is now widely
used in many areas, such as weather monitoring, because of its low cost, modularity, and
open design.
After the release of the second board type, the Raspberry Pi Foundation set up a new entity,
named Raspberry Pi Trading, and installed Eben Upton as CEO, with the responsibility of
developing technology. The Foundation was rededicated as an educational charity for
promoting the teaching of basic computer science in schools and developing countries. The
Raspberry Pi is one of the best-selling British computers.
The Raspberry Pi hardware has evolved through several versions that feature variations in
the type of central processing unit, memory capacity, networking support, and
peripheral-device support.
This block diagram describes Model B and B+; Model A, A+, and the Pi Zero are similar, but
lack the Ethernet and USB hub components. The Ethernet adapter is internally connected to
an additional USB port. In Model A, A+, and the Pi Zero, the USB port is connected directly
to the system on a chip (SoC). On the Pi 1 Model B+ and later models the USB/Ethernet chip
contains a five-port USB hub, of which four ports are available, while the Pi 1 Model B only
provides two. On the Pi Zero, the USB port is also connected directly to the SoC, but it uses
a micro USB (OTG) port. Unlike all other Pi models, the 40 pin GPIO connector is omitted on
the Pi Zero, with solderable through-holes only in the pin locations. The Pi Zero WH
remedies this.
Processor speed ranges from 700 MHz to 1.4 GHz for the Pi 3 Model B+ or 1.5 GHz for the Pi
4; on-board memory ranges from 256 MiB to 1 GiB random-access memory (RAM), with up
to 8 GiB available on the Pi 4. Secure Digital (SD) cards in MicroSDHC form factor (SDHC on
early models) are used to store the operating system and program memory. The boards
have one to five USB ports. For video output, HDMI and composite video are supported,
with a standard 3.5 mm tip-ring-sleeve jack for audio output. Lower-level output is provided
by a number of GPIO pins, which support common protocols like I²C. The B-models have
an 8P8C Ethernet port and the Pi 3, Pi 4 and Pi Zero W have on-board Wi-
Fi 802.11n and Bluetooth.
Since its inception, the Raspberry Pi Foundation has sold more than 8 million units.
Raspberry Pi 3 is the latest version and is the first 64-bit computing board that also comes
with built-in Wi-Fi and Bluetooth functions. According to Raspberry Pi Foundation CEO Eben
Upton, "it's been a year in the making". The Pi 3 features a quad-core 64-bit ARM
Cortex-A53 chip, 1GB of RAM, VideoCore IV graphics, Bluetooth 4.1 and
802.11n Wi-Fi. The developers claim the new architecture delivers an average 50%
performance improvement over the Pi 2.
Raspberry Pi uses Linux as its default operating system (OS). It is also fully Android
compatible. Using the system on Windows is enabled through a virtualization system
such as XenDesktop. If you want to develop an application for Raspberry Pi on your computer,
you need to download a specific toolset comprising an ARM compiler and some libraries
compiled for the ARM target platform, such as glibc.
DATA ANALYTICS AND SUPPORTING SERVICES
Traditional data management systems are simply unprepared for the demands
of what has come to be known as “big data.” As discussed throughout this book, the
real value of IoT is not just in connecting things but rather in the data produced by
those things, the new services you can enable via those connected things, and the
business insights that the data can reveal. However, to be useful, the data needs to be
handled in a way that is organized and controlled. Thus, a new approach to data
analytics is needed for the Internet of Things.
In the world of IoT, the creation of massive amounts of data from sensors is
common and one of the biggest challenges—not only from a transport perspective but
also from a data management standpoint. A great example of the deluge of data that
can be generated by IoT is found in the commercial aviation industry and the sensors
that are deployed throughout an aircraft. Modern jet engines are fitted with thousands
of sensors that generate a whopping 10GB of data per second.
For example, modern jet engines, similar to the one shown in Figure 1, may be
equipped with around 5000 sensors. Therefore, a twin engine commercial aircraft with
these engines operating on average 8 hours a day will generate over 500 TB of data
daily, and this is just the data from the engines! Aircraft today have thousands of other
sensors connected to the airframe and other systems. In fact, a single wing of a modern
jumbo jet is equipped with 10,000 sensors.
The potential for a petabyte (PB) of data per day per commercial airplane is not
farfetched—and this is just for one airplane. Across the world, there are approximately
100,000 commercial flights per day. The amount of IoT data coming just from the
commercial airline business is overwhelming. This example is but one of many that
highlight the big data problem that is being exacerbated by IoT. Analyzing this amount
of data in the most efficient manner possible falls under the umbrella of data analytics.
Data analytics must be able to offer actionable insights and knowledge from data, no
matter the amount or style, in a timely manner, or the full benefits of IoT cannot be
realized.
Before diving deeper into data analytics, it is important to define a few key
concepts related to data. For one thing, not all data is the same; it can be categorized
and thus analyzed in different ways. Depending on how data is categorized, various
data analytics tools and processing methods can be applied. Two important
categorizations from an IoT perspective are whether the data is structured or
unstructured and whether it is in motion or at rest.
Structured data and unstructured data are important classifications as they typically
require different toolsets from a data analytics perspective. Figure 2 provides a high-
level comparison of structured data and unstructured data.
Structured data means that the data follows a model or schema that defines how the
data is represented or organized, meaning it fits well with a traditional relational
database management system (RDBMS). In many cases you will find structured data in a
simple tabular form—for example, a spreadsheet where data occupies a specific cell
and can be explicitly defined and referenced. Structured data can be found in most
computing systems and includes everything from banking transactions and invoices to
computer log files and router configurations. IoT sensor data often uses structured
values, such as temperature, pressure, humidity, and so on, which are all sent in a
known format. Structured data is easily formatted, stored, queried, and processed; for
these reasons, it has been the core type of data used for making business decisions.
Because of the highly organizational format of structured data, a wide array of data
analytics tools are readily available for processing this type of data. From custom scripts
to commercial software like Microsoft Excel and Tableau, most people are familiar and
comfortable with working with structured data. Unstructured data lacks a logical
schema for understanding and decoding the data through traditional programming
means. Examples of this data type include text, speech, images, and video. As a general
rule, any data that does not fit neatly into a predefined data model is classified as
unstructured data. According to some estimates, around 80% of a business’s data is
unstructured. Because of this fact, data analytics methods that can be applied to
unstructured data, such as cognitive computing and machine learning, are deservedly
garnering a lot of attention. With machine learning applications, such as natural
language processing (NLP), you can decode speech. With image/facial recognition
applications, you can extract critical information from still images and video. The
handling of unstructured IoT data employing machine learning techniques is covered in
more depth later.
Smart objects in IoT networks generate both structured and unstructured data.
Structured data is more easily managed and processed due to its well-defined
organization. On the other hand, unstructured data can be harder to deal with and
typically requires very different analytics tools for processing the data. Being familiar
with both of these data classifications is important because knowing which data
classification you are working with makes integrating with the appropriate data
analytics solution much easier.
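As a small illustration of why the two classifications need different handling, the sketch below (field names and values are invented) parses a structured sensor reading against a known schema, something that is not possible for free-text unstructured data:

```python
import json

# A structured reading follows a known schema: every field can be
# referenced by name, exactly like a cell in an RDBMS table.
STRUCTURED_READING = '{"sensor_id": 7, "temperature": 21.5, "humidity": 40}'

# Unstructured data (free text) carries no schema to decode it with.
UNSTRUCTURED_NOTE = "Operator reports a faint burning smell near pump 3."

def parse_structured(reading: str) -> dict:
    """Decode a schema-following JSON reading into named fields."""
    record = json.loads(reading)
    # The schema tells us which fields must exist.
    assert {"sensor_id", "temperature", "humidity"} <= record.keys()
    return record

record = parse_structured(STRUCTURED_READING)
print(record["temperature"])  # fields are directly addressable
```

The unstructured note, by contrast, would need NLP or similar machine-learning techniques before anything comparable could be extracted from it.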
As in most networks, data in IoT networks is either in transit (“data in motion”) or being
held or stored (“data at rest”). Examples of data in motion include traditional
client/server exchanges, such as web browsing and file transfers, and email. Data saved
to a hard drive, storage array, or USB drive is data at rest. From an IoT perspective, the
data from smart objects is considered data in motion as it passes through the network
en route to its final destination. This is often processed at the edge, using fog
computing. When data is processed at the edge, it may be filtered and deleted or
forwarded on for further processing and possible storage at a fog node or in the data
center. Data does not come to rest at the edge.
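A minimal sketch of processing data in motion at the edge, assuming a made-up threshold rule: readings pass through a sliding window, and only windowed averages worth acting on are forwarded; everything else is dropped and never comes to rest:

```python
from collections import deque

def edge_filter(readings, window=3, threshold=80.0):
    """Keep a sliding window over streaming readings; forward the
    windowed average onward only when it crosses the threshold.
    All other samples are deleted at the edge."""
    recent = deque(maxlen=window)
    forwarded = []
    for value in readings:
        recent.append(value)
        avg = sum(recent) / len(recent)
        if avg >= threshold:
            forwarded.append(round(avg, 1))  # sent on for storage
    return forwarded

# Most samples are filtered out; only the anomalous burst is forwarded.
print(edge_filter([70, 71, 72, 90, 95, 99, 70]))
```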
When data arrives at the data center, it is possible to process it in real-time, just like at
the edge, while it is still in motion. Tools with this sort of capability, such as Spark,
Storm, and Flink, are relatively nascent compared to the tools for analyzing stored data.
Later sections of this chapter provide more information on these real-time streaming
analysis tools that are part of the Hadoop ecosystem. Data at rest in IoT networks can
be typically found in IoT brokers or in some sort of storage array at the data center.
Myriad tools, especially tools for structured data in relational databases, are available
from a data analytics perspective. The best known of these tools is Hadoop. Hadoop not
only helps with data processing but also data storage. It is discussed in more detail
later.
Descriptive: Descriptive data analysis tells you what is happening, either now or in the
past. For example, a thermometer in a truck engine reports temperature values every
second. From a descriptive analysis perspective, you can pull this data at any moment
to gain insight into the current operating condition of the truck engine. If the
temperature value is too high, then there may be a cooling problem or the engine may
be experiencing too much load.
Diagnostic: When you are interested in the “why,” diagnostic data analysis can provide
the answer. Continuing with the example of the temperature sensor in the truck engine,
you might wonder why the truck engine failed. Diagnostic analysis might show that the
temperature of the engine was too high, and the engine overheated. Applying
diagnostic analysis across the data generated by a wide range of smart objects can
provide a clear picture of why a problem or an event occurred.
Predictive: Predictive analysis aims to foretell problems or issues before they occur. For
example, with historical values of temperatures for the truck engine, predictive analysis
could provide an estimate on the remaining life of certain components in the engine.
These components could then be proactively replaced before failure occurs. Or perhaps
if temperature values of the truck engine start to rise slowly over time, this could
indicate the need for an oil change or some other sort of engine cooling maintenance.
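The three analysis types can be sketched on a toy series of engine temperatures (the threshold and values are illustrative, not from any real engine):

```python
def analyze_engine(temps, limit=110.0):
    """Toy sketch of descriptive, diagnostic and predictive analysis
    on a series of engine temperature samples."""
    # Descriptive: what is happening now? The latest reading.
    current = temps[-1]
    # Diagnostic: why did something happen? Was the limit exceeded?
    overheated = max(temps) > limit
    # Predictive: average rise per sample, projected one step ahead.
    slope = (temps[-1] - temps[0]) / (len(temps) - 1)
    projected_next = temps[-1] + slope
    return current, overheated, projected_next

temps = [100.0, 102.0, 104.0, 106.0, 108.0]
print(analyze_engine(temps))
```

A real predictive model would use far richer historical data, but the division of labour between the three questions (what, why, what next) is the same.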
As IoT has grown and evolved, it has become clear that traditional data analytics
solutions were not always adequate. For example, traditional data analytics typically
employs a standard RDBMS and corresponding tools, but the world of IoT is much more
demanding. While relational databases are still used for certain data types and
applications, they often struggle with the nature of IoT data. IoT data places two
specific challenges on a relational database:
Scaling problems: Due to the large number of smart objects in most IoT networks that
continually send data, relational databases can grow incredibly large very quickly. This
can result in performance issues that can be costly to resolve, often requiring more
hardware and architecture changes.
Volatility of data: With relational databases, it is critical that the schema be designed
correctly from the beginning. Changing it later can slow or stop the database from
operating. Due to the lack of flexibility, revisions to the schema must be kept at a
minimum. IoT data, however, is volatile in the sense that the data model is likely to
change and evolve over time. A dynamic schema is often required so that data model
changes can be made daily or even hourly.
To deal with challenges like scaling and data volatility, a different type of database,
known as NoSQL, is being used. Structured Query Language (SQL) is the computer
language used to communicate with an RDBMS. As the name implies, a NoSQL database
is a database that does not use SQL. It is not set up in the traditional tabular form of a
relational database. NoSQL databases do not enforce a strict schema, and they support
a complex, evolving data model. These databases are also inherently much more
scalable.
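The contrast between a fixed relational schema and a NoSQL-style evolving data model can be sketched with plain documents (the field names below are invented):

```python
# Relational table: every row must match one fixed schema, so adding
# a field means altering the schema for the whole table.
FIXED_SCHEMA = ("device_id", "temperature")

# NoSQL-style document store: each document carries its own fields,
# so the data model can evolve record by record, daily or hourly.
documents = [
    {"device_id": 1, "temperature": 21.5},
    {"device_id": 2, "temperature": 22.0, "humidity": 40},  # new field
    {"device_id": 3, "vibration": [0.1, 0.3, 0.2]},         # new shape
]

def fields_used(docs):
    """All field names appearing across the evolving documents."""
    names = set()
    for doc in docs:
        names.update(doc)
    return sorted(names)

print(fields_used(documents))
```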
In addition to the relational database challenges that IoT imposes, with its high volume
of smart object data that frequently changes, IoT also brings challenges with the live
streaming nature of its data and with managing data at the network level. Streaming
data, which is generated as smart objects transmit data, is challenging because it is
usually of a very high volume, and it is valuable only if it is possible to analyze and
respond to it in real-time. Real-time analysis of streaming data allows you to detect
patterns or anomalies that could indicate a problem or a situation that needs some kind
of immediate response. To have a chance of affecting the outcome of this problem, you
naturally must be able to filter and analyze the data while it is occurring, as close to the
edge as possible. The market for analyzing streaming data in real-time is growing fast.
Major cloud analytics providers, such as Google, Microsoft, and IBM, have streaming
analytics offerings, and various other applications can be used in house. (Edge
streaming analytics is discussed in depth later in this chapter.) Another challenge that
IoT brings to analytics is in the area of network data, which is referred to as network
analytics. With the large numbers of smart objects in IoT networks that are
communicating and streaming data, it can be challenging to ensure that these data
flows are effectively managed, monitored, and secure. Network analytics tools such as
Flexible NetFlow and IPFIX provide the capability to detect irregular patterns or other
problems in the flow of IoT data through a network. Network analytics, including both
Flexible NetFlow and IPFIX, is covered in more detail.
Data acquiring
Let us first discuss the following terms and their meanings used in IoT application layers.
Service denotes a mechanism, which enables the provisioning of access to one or more
capabilities. An interface for the service provides the access to capabilities. The access
to each capability is consistent with constraints and policies, which a service-description
specifies. Examples of service capabilities are automotive maintenance service
capabilities or service capabilities for the Automatic Chocolate Vending Machines
(ACVMs) for timely filling of chocolates into the machines.
Service consists of a set of related software components and their functionalities. The
set is reused for one or more purposes. Usage of the set is consistent with the controls,
constraints and policies which are specified in the service description for each service. A
service is also associated with a Service Level Agreement (SLA).
Operation means action or set of actions. For example, actions during a bank
transaction.
Query is a command for getting selected values from a database, which in turn returns
the answer to the query after processing it. A query example is a command to the ACVMs
database for providing sales data of ACVMs on Sundays near city gardens in a specific
festival period in a year. Another example is query to service center database for
providing the list of automobile components needing replacement that have completed
expected service-life in a specific vehicle.
Query Processing is a group of structured activities undertaken to get the results from a
data store as per the query.
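Query processing can be illustrated with an in-memory SQLite table standing in for the ACVM sales database (the schema and figures are hypothetical):

```python
import sqlite3

# Hypothetical ACVM sales table, invented purely for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (acvm_id INT, day TEXT, units INT)")
db.executemany("INSERT INTO sales VALUES (?, ?, ?)", [
    (1, "Sunday", 120),
    (1, "Monday", 40),
    (2, "Sunday", 95),
])

# The query asks for Sunday sales per machine; query processing is
# the set of structured activities the engine performs to answer it.
rows = db.execute(
    "SELECT acvm_id, SUM(units) FROM sales "
    "WHERE day = 'Sunday' GROUP BY acvm_id ORDER BY acvm_id"
).fetchall()
print(rows)  # one (machine, total units) pair per ACVM
```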
Key Value Pair (KVP) refers to a set of two linked entities, one is the key, which is a
unique identifier for a linked entity and the other is the value, which is either the entity
that is identified or a pointer to the location of that entity. A KVP example is a
birthday-date pair, such as birthday: July 17, 2000. Birthday is the key for a table and the
date July 17, 2000 is the value. KVP applications include look-up tables, hash tables and
network or device configuration files.
Hash Table (also called hash map) refers to a data structure which maps KVPs and is
used to implement an associative array (i.e., an array of KVPs). A hash table uses an
index which is computed from the key using a hash function; the index is used to get,
or point to, the desired value.
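Both ideas map directly onto a Python dict, which is itself a hash table (the configuration keys below are invented):

```python
# Key-value pair: the key "birthday" maps to the value "July 17, 2000".
kvp = {"birthday": "July 17, 2000"}

# A Python dict is a hash table: the key is run through a hash
# function, and the resulting index locates the stored value.
device_config = {
    "device_id": "acvm-17",      # unique identifier (key) -> value
    "report_interval_s": 3600,
}

def lookup(table, key, default=None):
    """Amortised constant-time lookup via the key's hash."""
    return table.get(key, default)

print(lookup(kvp, "birthday"))
print(lookup(device_config, "report_interval_s"))
```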
Bigtable maps two arbitrary string values, one used as a row key and the other as a
column key, into an associated arbitrary byte array. A timestamp provides a third
dimension to the mapping. The mapping is unlike a relational database, and can be
considered a sparse, distributed, multi-dimensional sorted map. The table can scale up
to hundreds or thousands of distributed computing nodes, with ease of adding more nodes.
Process Matrix is a multi-element entity, each element of which relates a set of data or
inputs to an activity (or subset of activities).
Business Intelligence (BI) is a process which enables a business service to extract new
facts and knowledge, and then undertake better decisions. These new facts and
knowledge follow from earlier results of data processing, aggregation and analysis of
these results.
Data Acquiring
1. Data Generation
Data generates at the devices and, later on, transfers to the Internet through a gateway.
Data generates as follows:
Passive devices data: Data generates at the device or system, following the result of
interactions. A passive device does not have its own power source. An external source
helps such a device to generate and send data. Examples are an RFID or an ATM debit
card. The device may or may not have an associated microcontroller, memory and
transceiver. A contactless card is an example of the former and a label or barcode is the
example of the latter.
Active devices data: Data generates at the device or system, following the result of
interactions. An active device has its own power source. Examples are active RFID,
streetlight sensor or wireless sensor node. An active device also has an associated
microcontroller, memory and transceiver.
Event data: A device can generate data on an event only once. For example, on
detection of the traffic or on dark ambient conditions, which signals the event. The
event on darkness communicates a need for lighting up a group of streetlights (Example
1.2). A system consisting of security cameras can generate data on an event of security
breach or on detection of an intrusion. A waste container with an associated circuit can
generate data in the event of getting filled up to 90% or above. The components and
devices in an automobile generate data about their performance and functioning; for
example, wearing out of a brake lining, play in the steering wheel or reduced
air-conditioning performance. The data communicates to the Internet. The communication takes
place as and when the automobile reaches near a Wi-Fi access point.
Device real-time data: An ATM generates data and communicates it to the server
instantaneously through the Internet. This initiates and enables Online Transaction
Processing (OLTP) in real time.
Event-driven device data: Device data can generate on an event only once. Examples
are: (i) a device receives a command from a Controller or Monitor, and then performs
action(s) using an actuator; when the action completes, the device sends an
acknowledgement; (ii) when an application seeks the status of a device, the device
communicates the status.
2. Data acquisition
Data acquisition means acquiring data from IoT or M2M devices. The data
communicates after the interactions with a data acquisition system (application). The
application interacts and communicates with a number of devices for acquiring the
needed data. The devices send data on demand or at programmed intervals. Data of
devices communicate using the network, transport and security layers.
An application can configure the devices for the data when devices have configuration
capability. For example, the system can configure devices to send data at defined
periodic intervals. Each device configuration controls the frequency of data generation.
For example, system can configure an umbrella device to acquire weather data from
the Internet weather service, once each working day in a week. An ACVM can be
configured to communicate the sales data of machine and other information, every
hour. The ACVM system can be configured to communicate instantaneously in the event
of a fault, or when the requirement for a specific chocolate flavour needs the Fill service.
An application can configure the sending of data after filtering or enriching at the gateway
at the data-adaptation layer. The gateway in between the application and the devices can
provision one or more of the following functions—transcoding, data management
and device management. Data management may include provisioning of privacy and
security, and data integration, compaction and fusion.
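A minimal sketch of such gateway-side filtering and enrichment, with invented range limits and field names:

```python
def gateway_adapt(raw_readings, low=-40.0, high=85.0, device_id="node-7"):
    """Data-adaptation at a gateway: filter out-of-range values, then
    enrich each surviving reading with the device ID before forwarding
    it over IP to the application."""
    adapted = []
    for value in raw_readings:
        if not (low <= value <= high):   # filtering: drop invalid data
            continue
        adapted.append({                 # enrichment: add context
            "device_id": device_id,
            "temperature": value,
        })
    return adapted

print(gateway_adapt([21.5, 999.0, 23.0]))  # 999.0 is filtered out
```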
3. Data validation
Data acquired from the devices does not mean that data are correct, meaningful or
consistent. Data consistency means within expected range data or as per pattern or
data not corrupted during transmission. Therefore, data needs validation checks. Data
validation software does the validation checks on the acquired data. Validation software
applies logic, rules and semantic annotations. The applications or services depend on
valid data. Then only the analytics, predictions, prescriptions, diagnosis and decisions
can be acceptable.
Large magnitude of data is acquired from a large number of devices, especially, from
machines in industrial plants or embedded components data from large number of
automobiles or health devices in ICUs or wireless sensor networks, and so on.
Validation software, therefore, consumes significant resources. An appropriate strategy
needs to be adopted. For example, the adopted strategy may be filtering out the invalid
data at the gateway or at device itself or controlling the frequency of acquiring or
cyclically scheduling the set of devices in industrial systems. Data enriches, aggregates,
fuses or compacts at the adaptation layer.
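A sketch of such validation checks on acquired readings (the range, pattern and type rules below are illustrative only):

```python
import re

def validate(reading):
    """Apply simple validation rules to one acquired reading:
    a type check, a range check, and a pattern check on the device ID."""
    if not isinstance(reading.get("temperature"), (int, float)):
        return False                       # corrupted / wrong type
    if not -40.0 <= reading["temperature"] <= 85.0:
        return False                       # outside expected range
    if not re.fullmatch(r"node-\d+", reading.get("device_id", "")):
        return False                       # malformed identifier
    return True

readings = [
    {"device_id": "node-3", "temperature": 22.5},   # valid
    {"device_id": "node-3", "temperature": 300.0},  # out of range
    {"device_id": "???", "temperature": 20.0},      # bad ID pattern
]
print([validate(r) for r in readings])
```

In practice, such rules would run at the gateway or the device itself to filter invalid data before it consumes analytics resources.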
Services, business processes and business intelligence use data. Valid, useful and
relevant data can be categorized into three categories for storage: data alone; data as
well as the results of processing; or only the results of data analytics. Following are the
three cases for storage:
a. Data which needs to be repeatedly processed, referenced or audited in future, and
therefore, data alone needs to be stored.
b. Data which needs processing only once, and the results are used at a later time
using the analytics, and both the data and results of processing and analytics are
stored. Advantages of this case are quick visualization and reports generation
without reprocessing. Also the data is available for reference or auditing in future.
c. Online, real-time or streaming data need to be processed and the results of this
processing and analysis need storage.
Data from large number of devices and sources categorizes into a fourth category called
Big data. Data is stored in databases at a server or in a data warehouse or on a Cloud as
Big data.
A device can generate events. For example, a sensor can generate an event when
temperature reaches a preset value or falls below a threshold. A pressure sensor in a
boiler generates an event when pressure exceeds a critical value which warrants
attention. Each event can be assigned an ID. A logic value sets or resets for an event
state. Logic 1 refers to an event generated but not yet acted upon. Logic 0 refers to an
event generated and acted upon or not yet generated. A software component in
applications can assemble the events (logic value, event ID and device ID) and can also
add Date time stamp. Events from IoTs and logic-flows assemble using software.
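The event-assembly scheme described above can be sketched as follows (the event and device IDs are invented):

```python
from datetime import datetime, timezone

EVENT_NOT_ACTED = 1   # event generated but not yet acted upon
EVENT_ACTED = 0       # event acted upon (or not yet generated)

def assemble_event(event_id, device_id):
    """Assemble an event record (logic value, event ID and device ID)
    and add a date-time stamp, as the text describes."""
    return {
        "logic": EVENT_NOT_ACTED,
        "event_id": event_id,
        "device_id": device_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }

def act_on(event):
    """Acting on the event resets its logic value to 0."""
    event["logic"] = EVENT_ACTED
    return event

evt = assemble_event("overpressure", "boiler-1")
print(evt["logic"])          # pending
print(act_on(evt)["logic"])  # handled
```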
6. Data store
A data store is a data repository of a set of objects which integrate into the store.
Features of a data store are:
● Objects in a data store are modeled using classes which are defined by the database
schemas.
● A data store may be distributed over multiple nodes. Apache Cassandra is an example
of a distributed data store.
● A data store may consist of multiple schemas or may consist of data in only one
schema. An example of a single-schema data store is a relational database.
Repository means a collection which can be relied upon to look for required things,
special information or knowledge. For example, a repository of paintings of artists. A
database is a repository of data which can be relied upon for reporting, analytics,
processing, knowledge discovery and intelligence. A flat file is another repository.
A data centre is a facility which has multiple banks of computers, servers, large memory
systems, high speed network and Internet connectivity. The centre provides data
security and protection using advanced tools, full data backups along with data
recovery, redundant data communication connections and full system power as well as
electricity supply backups.
Large industrial units, banks, railways, airlines and units for whom data are the critical
components use the services of data centres. Data centres also possess a dust-free
environment with heating, ventilation and air conditioning (HVAC), cooling, humidification
and dehumidification equipment, and a pressurisation system, within a physically highly
secure environment. The manager of a data centre is responsible for all technical and IT issues,
operations of computers and servers, data entries, data security, data quality control,
network quality control and the management of the services and applications used for
data processing.
8. Server Management
Server management means managing services, setup and maintenance of systems of all
types associated with the server. A server needs to serve around the clock. Server
management includes managing the following:
● Optimised performance
● High degree of security and integrity and effective protection of data, files and
databases at the organisation
Consider goods with RFID tags. When goods move from one place to another, the IDs of
goods as well as locations are needed in tracking or inventory control applications.
Spatial storage is storage in a spatial database, which is optimised to store spatial data
and later answer queries from the applications. Suppose a digital map is required for parking
slots in a city. Spatial data refers to data which represents objects defined in a
geometric space. Points, lines and polygons are common geometric objects which can
be represented in spatial databases. Spatial database can also represent database for
3D objects, topological coverage, linear networks, triangular irregular networks and
other complex structures. Additional functionality in spatial databases enables efficient
processing. Internet communication by RFIDs, ATMs, vehicles, ambulances, traffic lights,
streetlights and waste containers are examples of where spatial databases are used.
A spatial database functions optimally for spatial queries. A spatial database can perform
typical SQL queries, such as select statements, and also performs a wide variety of spatial
operations. A spatial database has the following features:
● Can perform observer functions using queries which reply with specific spatial
information, such as the location of the centre of a geometric object
● Can change the existing features to new ones using spatial functions, and can
predicate spatial relationships between geometries using true or false type queries.
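These observer and predicate functions can be sketched in plain code (the parking-slot coordinates are invented; real spatial databases perform such operations natively and far more efficiently):

```python
def centroid(points):
    """Observer function: centre of a geometric object, here the
    average of a polygon's vertices (which coincides with the true
    centre for a rectangle)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(points), sum(ys) / len(points))

def within_radius(point, centre, radius):
    """Predicate: a true/false spatial relationship between
    geometries, analogous to a 'within distance' spatial query."""
    dx, dy = point[0] - centre[0], point[1] - centre[1]
    return (dx * dx + dy * dy) ** 0.5 <= radius

# Hypothetical parking-slot corners on a city map grid.
slot = [(0, 0), (4, 0), (4, 2), (0, 2)]
print(centroid(slot))
print(within_radius((3, 1), centroid(slot), 1.5))
```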
A few conventional methods for data collection and storage are as follows:
● Communicating and saving the devices’ data in the files locally on removable media,
such as micro SD cards and computer hard disks
● Communicating and saving the data and results of computations in a dedicated data
store or coordinating node locally
● Communicating and saving data at a local node, which is a part of a distributed DBMS
Cloud is a new-generation method for data collection, storage and computing. Later
sections describe the cloud computing paradigm for data collection, storage, computing
and services; cloud-computing service models under the software architectural concept
'Everything as a Service'; IoT-specific cloud-based services, such as Xively and Nimbits;
and platforms such as AWS IoT, Cisco IoT, IOx and Fog, IBM IoT Foundation, and TCS
Connected Universe Platform (TCS CUP).
Different methods of data collection, storage and computing are shown in Figure 6.1.
The figure shows (i) devices or sensor network data collection at the device web
server, (ii) local files, (iii) a dedicated data store at a coordinating node, (iv) a local node
in a distributed DBMS, (v) an Internet-connected data centre, (vi) an Internet-connected
server, (vii) Internet-connected distributed DBMS nodes, and (viii) cloud infrastructure
and services.
Following are the key terms and their meanings, which need to be understood before
learning about the cloud computing platform.
Resource refers to one that can be read (used), written (created or changed) or
executed (processed). A path specification is also a resource. A resource is atomic
(not further divisible) information, which is usable during computations. A resource
may have multiple instances or just a single instance. A data point, pointer, data,
object, data store or method can also be a resource.
System resource refers to an operating system (OS), memory, network, server, software or application. Environment refers to an environment for programming, program execution or both. For example, Cloud9 provides an open online programming environment for the BeagleBone board used in IoT device development; the Windows environment supports programming and execution of applications; and the Google App Engine environment supports creation and execution of web applications in Python or Java. Platform denotes the basic hardware, operating system and network used for software applications or services, over which programs can be run or developed.
A platform may provide a browser and APIs which can be used as a base on which other
applications can be run or developed.
Edge computing is a type of computing that pushes the frontier of computing applications, data and services away from centralised nodes towards the IoT data-generating nodes, that is, to the logical extremes of the network. IoT device nodes are pushed by events, triggers, alerts and messages, and data is collected for enrichment, storage and computation there rather than at remote centralised database nodes. Pushing computation away from centralised nodes enables the use of resources at the device nodes, which can be a requirement in low-power lossy networks. Edge processing can also be classified as edge computing at a local cloud, or as grid or mesh computing. The nodes may be mobile, belong to a wireless sensor network, or be cooperatively distributed in peer-to-peer and ad-hoc networks.
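The idea of edge computing above can be sketched as follows: rather than streaming every raw reading to a central server, the device node aggregates locally and forwards only a small summary. This is an illustrative sketch; `send()` is a hypothetical stand-in for the real uplink (which in practice might use MQTT or CoAP over a low-power lossy network).

```python
def summarise(readings):
    """Reduce a window of raw readings to min/avg/max/count at the edge node."""
    return {
        "min": min(readings),
        "max": max(readings),
        "avg": sum(readings) / len(readings),
        "count": len(readings),
    }

def send(summary):
    # Hypothetical placeholder for the constrained uplink to the
    # centralised node; only the summary crosses the network.
    print("uplink:", summary)

window = [22.9, 23.1, 23.4, 29.8, 23.0]   # raw samples held at the device
send(summarise(window))                    # one small message instead of five
```

The saving is in uplink traffic and central-node load: the computation runs where the data is generated, which is exactly what "pushing computation to the logical extremes of the network" means.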
Distributed computing refers to computing that uses resources distributed across multiple computing environments over the Internet. The resources are logically related: they communicate among themselves using message passing and transparency concepts, cooperate with each other, can be moved without affecting the computations, and can be considered one computing system (location independence).
Service is software that provides capabilities and logically grouped, encapsulated functionalities. An application calls a service to utilise its capabilities. A service has description and discovery methods, such as an advertisement for direct use or discovery through a service broker. A service is bound by a Service Level Agreement (SLA) between the service (provider endpoint) and the application (endpoint). One service can also use another service.
Web service, according to the W3C definition, is an application identified by a URI, described and discovered using the XML-based Web Services Description Language (WSDL). A web service interacts with other services and applications using XML messages and exchanges objects using Internet protocols.
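The XML message exchange described above can be sketched with Python's standard library. The element names here (`getReading`, `readingResponse`) are hypothetical, not taken from any real WSDL, and the reply is simulated rather than fetched over the network.

```python
import xml.etree.ElementTree as ET

def build_request(sensor_id):
    """Build an XML request message for a (hypothetical) reading service."""
    req = ET.Element("getReading")
    ET.SubElement(req, "sensorId").text = sensor_id
    return ET.tostring(req, encoding="unicode")

def parse_response(xml_text):
    """Extract the numeric value from the service's XML reply."""
    root = ET.fromstring(xml_text)
    return float(root.findtext("value"))

print(build_request("temp-01"))
# Simulated reply from the service:
reply = "<readingResponse><value>23.6</value></readingResponse>"
print(parse_response(reply))  # 23.6
```

In a real deployment the request string would be carried over an Internet protocol such as HTTP, as the definition states.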
Grid computing refers to computing using the pooled interconnected grid of computing
resources and environments in place of web servers.
Utility computing refers to computing that focuses on service levels, with an optimum amount of resources allotted when required; it draws on pooled resources and environments for hosting applications, and the applications utilise the services.
Cloud computing refers to computing using a collection of services available over the Internet
that deliver computational functionality on the infrastructure of a service provider for
connected systems and enables distributed grid and utility computing.
Key Performance Indicator (KPI) refers to a set of values, usually consisting of one or more raw monitored values, including the minimum, average and maximum values specifying the scale. A service is expected to be fast, reliable and secure, and KPIs monitor the fulfilment of these objectives. For example, a set of values can relate to Quality of Service (QoS) characteristics such as bandwidth availability, data-backup capability, peak and average workload-handling capacity, the ability to handle a defined volume of demand at different times of the day, and the ability to deliver a defined total volume of service. A cloud service should be able to fulfil the minimum, average and maximum KPI values agreed in the SLA.
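Deriving min/average/max KPI values from raw monitored values and checking them against SLA limits can be sketched as below. The latency samples and threshold numbers are illustrative assumptions, not figures from any real SLA.

```python
def kpi(values):
    """Reduce raw monitored values to min/average/max KPI values."""
    return {"min": min(values),
            "avg": sum(values) / len(values),
            "max": max(values)}

def meets_sla(latency_ms, max_avg=100.0, max_peak=250.0):
    """Check the KPIs against (hypothetical) SLA limits: average and peak latency."""
    k = kpi(latency_ms)
    return k["avg"] <= max_avg and k["max"] <= max_peak

samples = [40, 55, 62, 48, 230]   # monitored response times in ms
print(kpi(samples))               # {'min': 40, 'avg': 87.0, 'max': 230}
print(meets_sla(samples))         # True: average and peak are within limits
```

A monitoring agent would evaluate such a check continuously and report SLA violations to both provider and user.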
Seamless cloud computing means that content usage and computations continue without any break when the service usage moves to a location with a similar QoS level and KPIs; for example, a software developer continues using the same cloud platform after shifting location.
Elasticity denotes that an application can deploy local as well as remote applications or services and release them after use, with the user incurring costs as per the usage and KPIs.
Measurability (of a resource or service) means that it can be measured for controlling or monitoring, which enables reporting on the delivery of the resource or service.
Scalability in cloud services refers to the ability by which an application can deploy smaller local resources as well as remotely distributed servers and resources, and then increase or decrease the usage, incurring cost as per usage on an increasing scale.
Cloud computing means a collection of services available over the Internet. The cloud delivers the computational functionality, and cloud computing deploys the infrastructure of a cloud-service provider. The infrastructure is deployed on a utility, grid-computing or web-services environment that includes the network, systems, grid of computers, servers or data centres. Just as we, the users of electricity, do not need to know about the source and underlying infrastructure of the electricity-supply service, a user of a computing service or application need not know how the infrastructure is deployed or the details of the computing environment. And just as the user does not need to know about the Intel processor inside a computer, the user simply uses the data, computing and intelligence in the cloud as part of the services; the services are used as a utility from the cloud.
● Infrastructure for large data storage of devices, RFIDs, industrial plant machines,
automobiles and device networks
Cloud platforms are used for connecting devices, data, APIs, applications and services, persons, enterprises, businesses and XaaS.
Internet Cloud + Clients = User applications and services with ‘no boundaries and no walls’
An application or service executes on a platform, which includes the operating system (OS), hardware and network. Multiple applications may initially be designed to run on diversified platforms (OSs, hardware and networks). Applications and services need to be integrated on a common platform and running environment. A cloud storage and computing environment offers a virtualised environment: a running environment made to appear as one to all applications and services, even though two or more physical running environments and platforms may actually be present.
Virtualisation
Applications need not be aware of the platform, just Internet connectivity to the platform,
called cloud platform, is required. The storage is called cloud storage. The computing is called
cloud computing. The services are called cloud services in line with the web services which
host on web servers.
Virtualization of storage means user application or service accesses physical storage using
abstract database interface or file system or logical drive or disk drive, though in fact storage
may be accessible using multiple interfaces or servers. For example, Apple iCloud offers
storage to a user or user group that enables the sharing of albums, music, videos, data store,
editing files and collaboration among the user group members.
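Storage virtualisation as described above can be sketched as a single abstract store interface hiding several physical backends. This is a simplified illustration under stated assumptions: the in-memory dictionaries stand in for real storage servers, and the hash-based placement rule is a hypothetical choice, not how any particular product works.

```python
class VirtualStore:
    """Presents one logical key-value store over multiple physical backends."""

    def __init__(self, backends):
        self.backends = backends          # e.g. several physical storage servers

    def _pick(self, key):
        # Placement rule (illustrative): hash the key onto one backend.
        return self.backends[hash(key) % len(self.backends)]

    def put(self, key, value):
        self._pick(key)[key] = value      # the caller never sees which backend

    def get(self, key):
        return self._pick(key)[key]

store = VirtualStore([{}, {}, {}])        # three "physical" stores behind one interface
store.put("album/photo1", b"...")
print(store.get("album/photo1"))          # the application sees a single store
```

The application always talks to `store`; which backend actually holds the data is invisible to it, which is the essence of virtualised storage.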
Network Function Virtualisation (NFV) means a user application or service accesses the
resources appearing as just one network, though the network access to the resources may be
through multiple resources and networks.
Virtualisation of a server means the user application appears to access just one server, though in fact it may be accessing multiple servers.
A virtualised desktop means the user can change and deploy multiple desktops, though the user's access is through their own computer platform (OS) and may in fact pass through multiple OSs, platforms or remote computers.
● On demand self-service to users for the provision of storage, computing servers, software
delivery and server time
● Elasticity
● Scalability
● Maintainability
● Homogeneity
● Virtualisation
● Resilient computing
● Advanced security
● Low cost
Private cloud: This model is exclusively for use by an institution, industry, business or enterprise, and is meant for private use within the organisation by employees and associated users only.
Community cloud: This model is exclusively for use by a community formed by institutions, industries, businesses or enterprises, and for use within the community's organisations, employees and associated users. The community specifies the security and compliance considerations.
Hybrid cloud: A set of two or more distinct clouds (public, private or community) with distinct data stores and applications, bound together by proprietary or standard technology.
Cloud connects the devices, data, applications, services, persons and business. Cloud services
can be considered as distribution service—a service for linking the resources (computing
functions, data store, processing functions, networks, servers and applications) and for
provision of coordinating between the resources.
SaaS means Software as a Service: software made available to an application or service on demand. SaaS is a service model where the applications or services are deployed and hosted at the cloud and made available through the Internet on demand to the service user. Software control, maintenance, updating to new versions, and the infrastructure, platform and resource requirements are the responsibilities of the cloud service provider.
IaaS means Infrastructure as a Service: the infrastructure (data stores, servers, data centres and network) is made available to a user or application developer on demand. The developer installs the OS image, data store and applications, and controls them on the infrastructure. IaaS is a service model where a developer or user rents the infrastructure, made available through the Internet on demand (pay per use in a multi-tenancy model). The IaaS computing systems, network and security are the responsibilities of the cloud service provider.
DaaS means Data as a Service: data at a data centre is made available to a user or application developer on demand. DaaS is a service model where a data store or data warehouse is made available through the Internet on demand, on rent (pay per use in a multi-tenancy model), to an enterprise. Data-centre management, 24×7 power, control, network, maintenance, scaling up, data replication and mirror nodes and systems, as well as physical security, are the responsibilities of the data-centre service provider.