Rohit Thanki • Purva Joshi
Advanced Technologies
for Industrial Applications
Rohit Thanki
Krian Software GmbH
Wolfsburg, Germany

Purva Joshi
Department of Information Engineering
University of Pisa
Pisa, Italy
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Switzerland
AG 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Switzerland AG
The registered company address is: Gewerbestrasse 11, 6330 Cham, Switzerland
We are honored to dedicate this book to our
family, gurus, and loved ones. You have been
our unwavering source of support,
inspiration, and blessings throughout our lives.
Preface
Our modern society cannot ignore the influence of technology. We are immersed in
technology whether we are at a desktop or laptop in the office, checking our heart
rate on a smartwatch, playing with an iPhone, or talking to Alexa. Every industry is
being disrupted by technology, and we all know that. Even relatively new areas
such as the development of new tools and applications are being disrupted by the
next generation of technology, which moves incredibly fast. Moore's law states that
the number of transistors on integrated circuits will continue to double roughly every
two years for the near future. As technology advances, it becomes more powerful and
faster while simultaneously becoming more lightweight, and this is happening at a
remarkable rate.
In this book, we discuss a variety of technologies such as system identification,
signal processing, computer vision, and artificial intelligence, and their use in
industry. These technologies have great market value and significant influence
on human society. Various tools and applications have been developed using these
technologies to benefit society. During the pandemic, these technologies became
critical assets for developing modern industrial applications. This book covers the
usage and importance of these technologies in various industrial applications. It also
introduces future technological tools that will help in the development of a variety
of industrial applications.
Chapter 1 provides basic information about the various technologies used in industrial
applications. Chapter 2 addresses the basic concept of system identification and
its usage in various industries. In Chap. 3, we present signal processing and
its applications in areas such as broadcasting, defense, etc. Chapter 4
gives information regarding computer vision technology and its usage in various
industries. Furthermore, artificial intelligence technology, along with its commercial
usage, is covered in Chap. 5. Chapter 6 presents advanced technological tools
such as the Internet of Health Things, autonomous robots, etc. The book
has the following features:
• Describes basic terminologies of various technologies such as system identifica-
tion, signal processing, computer vision, and artificial intelligence
• Presents various technological tools for industrial applications
Contents

1 Introduction
2 System Identification and Its Applications
   2.1 What Is System Identification?
   2.2 Parametric and Nonparametric System Identification
       2.2.1 Parametric Model Estimation Method
   2.3 Optimization for Time-Varying System
       2.3.1 Adaptive κ-Nearest Neighbor Method
       2.3.2 Robust Control Method
   2.4 Industrial Applications of Time-Varying System
       2.4.1 Robotic-Based Automotive Industries
       2.4.2 Chemical Industries
       2.4.3 Communication and Networking
       2.4.4 Agriculture and Smart Farming
       2.4.5 Logistics and Storage Industries
   References
3 Signal Processing and Its Applications
   3.1 Basic of Signal Processing
       3.1.1 Types of Signal Processing
       3.1.2 Types of Different Systems
   3.2 Transforms Used for Analysis of Signals and Systems
       3.2.1 Laplace Transform
       3.2.2 Z-Transform
       3.2.3 Fourier Transform
       3.2.4 Wavelet Transform
   3.3 Designing of Discrete-Time Systems
       3.3.1 Finite Impulse Response (FIR) Filter
       3.3.2 Infinite Impulse Response (IIR) Filter
   3.4 Industrial Applications of Signal Processing (SP)
       3.4.1 SP for Digital Front End and Radio Frequency
       3.4.2 Development of Chip for All DSP
Index
Chapter 1
Introduction
Technology has become an integral part of our daily lives, and it’s hard to imagine a
world without it. Technological advancements have changed how we communicate,
work, and interact with the world around us, from smartphones to social media,
from artificial intelligence to the Internet of Things. Technology has revolutionized
nearly every industry, from healthcare to finance, education to transportation.
In this day and age, it’s important to understand the impact of technology on
our lives and society as a whole. With new technologies emerging daily, keeping up
with the latest advancements and understanding how they work can be challenging.
But by staying informed and aware of the benefits and challenges of technology, we
can make informed decisions about its use and create a better future for ourselves
and our communities. The technology discussed in this book will explore the
latest advancements, their potential applications, and the ethical considerations
surrounding their use. We’ll look at how technology has transformed industries,
from healthcare to finance, and we’ll consider the ways in which it is likely to
shape our world in the years to come. By the end of reading this book, you’ll better
understand the role technology plays in our lives and the importance of staying
informed about the latest advancements.
Technological advancements have brought about significant changes in various
industries, enabling them to operate more efficiently and effectively. The use of
technology has become a crucial aspect of modern-day industries, as it helps to
improve productivity, quality, and speed while reducing costs. From automation
to artificial intelligence, machine learning to the Internet of Things, industries
use the latest technologies to streamline operations and gain a competitive edge.
By leveraging these tools, industries can optimize their supply chains, manage
inventory more effectively, and monitor their production processes to ensure they
run efficiently.
This book will explore how industries use technology to transform operations and
achieve goals. We will look at the latest advancements in automation, robotics, and
other technologies and consider their potential applications in various industries.

Signal processing is a powerful and versatile field of study that plays a critical
role in many areas of science and technology. By analyzing and manipulating
signals, signal processing allows us to extract information and make decisions that
can improve our lives and advance our understanding of the world around us. All
information regarding signal processing is covered in Chap. 3.
Image processing is a field of study that involves the analysis and manipulation
of digital images. Digital images are composed of pixels, each representing a single
point of color or intensity within the image. Image processing techniques can be
used to enhance or modify images, extract information from them, or perform other
tasks such as compression and transmission. Image processing has many applica-
tions in fields such as medicine, remote sensing, and computer vision. In medical
imaging, for example, image processing techniques can be used to enhance images
of the human body for diagnostic purposes. In remote sensing, image processing
can be used to analyze satellite imagery to monitor environmental changes or detect
objects on the ground. In computer vision, image processing enables machines to
“see” and interpret the visual world. Image processing techniques range from simple
operations such as resizing and cropping to more advanced methods such as image
segmentation, feature extraction, and machine learning. These techniques can be
applied to images from various sources, including digital cameras, medical imaging
equipment, and satellites. In summary, image processing is a powerful and versatile
field of study that allows us to analyze, modify, and extract information from digital
images. With applications in fields such as medicine, remote sensing, and computer
vision, image processing is essential for advancing our understanding of the world
around us. All information regarding image processing is covered in Chap. 4.
The field of artificial intelligence (AI) is one of the fastest-growing academic
fields as it aims to create machines that can perform tasks that normally require
human intelligence. This includes learning, problem-solving, decision-making, and
language understanding tasks. AI can transform many industries, from healthcare to
finance to transportation. AI aims to create machines that can learn and adapt to new
situations as humans do. This involves developing algorithms and models to analyze
large amounts of data and make predictions based on that data. Machine learning,
a subfield of AI, is compelling, allowing machines to learn from experience and
improve their performance over time. AI has many applications in various fields.
For example, AI can analyze patient data in healthcare to assist doctors in diagnosis
and treatment planning. In finance, AI can predict market trends and improve
investment strategies. In transportation, AI can be used to develop autonomous
vehicles that can navigate roads and traffic without human intervention. While AI
can potentially revolutionize many industries, it raises ethical and societal concerns.
For example, there are concerns about the impact of AI on employment, as machines
may replace human workers in specific jobs. There are also concerns about bias in
AI algorithms, which can lead to unfair treatment of certain groups. In summary,
artificial intelligence is a rapidly advancing field that can transform many industries.
By creating machines that can learn and adapt to new situations, AI has the potential
to improve our lives in countless ways. However, as with any new technology, there
are also potential risks and ethical considerations that must be carefully considered.
All information regarding artificial intelligence is covered in Chap. 5.
Finally, Chap. 6 gives information on various advanced technologies and tools
such as the Internet of Things (IoT), robotics, human-machine interfaces (HMIs),
AI software, augmented and virtual reality, blockchain, and cybersecurity. Also,
this chapter covers open research problems in different domains such as machine
learning, biomedical imaging, robotics, natural language processing, and wireless
communications.
Chapter 2
System Identification and Its Applications
In the real world, different systems exist with different characteristics, such as time-
invariant and time-varying. The systems can be classified in different ways, such as
linear and nonlinear systems, time-variant and time-invariant systems, linear time-
variant and linear time-invariant systems, static and dynamic systems, causal and
non-causal systems, and stable and unstable systems.
• Linear and Nonlinear Systems: A system is linear if it satisfies the superposition
property in Eq. (2.1): when input signals A(t) and B(t) produce output signals C(t)
and D(t), respectively, any weighted sum of the inputs produces the same weighted
sum of the outputs,

$$a \cdot A(t) + b \cdot B(t) \;\rightarrow\; a \cdot C(t) + b \cdot D(t) \qquad (2.1)$$

Systems that do not satisfy this property are nonlinear systems.
• Causal and Non-causal Systems: Just as static and dynamic systems are distinguished,
causal systems are those whose outputs depend only on the present and past states of
the inputs, whereas non-causal systems also depend on future inputs.
• Stable and Unstable Systems: A system is considered stable when every bounded
input produces a bounded output. An unstable system can produce an unbounded
output for a bounded input.
Various nonlinear, time-varying [18] (variant) systems exist in the real world.
The modeling of time-varying nonlinear characteristics and the nonparametric
tracking of these properties are well-known and play a significant role in system
identification. The weighted least squares method has been employed to identify
such systems. This study aims to monitor time-varying nonlinearities while balanc-
ing bias and variance using various estimation techniques [3]. Various estimation
methods [17] and regression techniques have been used to evaluate how to balance
bias and variance.
A modified self-tuning regulator with restricted flexibility has been successfully
applied in a large-scale chemical pilot plant. A least squares estimator with the
variable weighting of historical data is used in the new approach; at each step, a
weighting factor is selected to keep the estimator’s scalar measure of information
content constant. It is demonstrated that such a method enables the parameter
estimations to track gradual and abrupt changes in the plant dynamics for essentially
deterministic systems. This chapter has been divided into a few sections and
emphasizes the industrial application of time-varying nonparametric systems.
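As a hedged illustration of the idea just described — a least squares estimator that weights historical data so that recent observations dominate — the sketch below implements recursive least squares with an exponential forgetting factor. The model structure, signal names, and constant forgetting factor are illustrative assumptions, not the exact scheme used in the pilot-plant study.

```python
import numpy as np

def rls_forgetting(phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with exponential forgetting.

    phi   : (T, n) array of regressor vectors
    y     : (T,) array of measured outputs
    lam   : forgetting factor (0 < lam <= 1); smaller values track faster
    delta : initial covariance scale
    Returns the (T, n) history of parameter estimates.
    """
    T, n = phi.shape
    theta = np.zeros(n)            # current parameter estimate
    P = delta * np.eye(n)          # estimate covariance
    history = np.zeros((T, n))
    for t in range(T):
        x = phi[t]
        # Gain vector and covariance update, discounted by the forgetting factor
        k = P @ x / (lam + x @ P @ x)
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, x @ P)) / lam
        history[t] = theta
    return history

# Toy example: y(t) = a(t) * u(t-1), where the gain a(t) drifts slowly over time.
rng = np.random.default_rng(0)
u = rng.normal(size=500)
a_true = np.linspace(0.5, 2.0, 500)          # slowly time-varying parameter
y = a_true * np.roll(u, 1) + 0.05 * rng.normal(size=500)
est = rls_forgetting(np.roll(u, 1).reshape(-1, 1), y)
print(est[-1])   # final estimate should be close to 2.0
```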
Nonlinear systems [7] are complex to control and not easy to operate. However, if
engineers can determine the percentage (degree) of nonlinearity, solving the
constraint problems becomes easier. System identification suggests that the parameter
estimation process becomes considerably simpler when the same type of system
structure is used, specifically for identifying nonlinear systems frequently encountered
in applications involving biological or chemical components.
Exercise 1 Let us define one system which can be illustrated as follows. It is assumed
that the inputs are measured by applying a square to the current point:

$$f(y) = \sum_{i=1}^{n} \alpha_i(x) \int_{0}^{\infty} u_n^{2}(t)\, dt \qquad (2.5)$$
Here, it is clear that f(t) and u_n(t) are the only measured quantities, and the
parameters appear only through the sum α_1 + α_2 + · · · + α_n. As a result, we cannot
independently estimate each parameter from the data.
The main problem in identifiability analysis is the uniqueness of the estimates, which
has garnered a lot of attention in the literature. The analysis [19] is referred to as
theoretical, structural, or deterministic identifiability analysis when the question
solely considers whether the experiment and model structure, in theory, lead to unique
parameter values, without regard to uncertainties. Most methods for this kind of
analysis are only useful for problems with few unknowns. Hence, they are not studied
further here.
For instance, the order of data-driven models [10] is sometimes easy to understand
but complex to identify based on the percentage of nonlinearity. However,
block-oriented models [6], such as Hammerstein-Wiener models, are usually solved
using kernel algorithms and also support vector machine schemes. Block-oriented
parametric [20] and nonparametric system identification are discussed in the next
section.
Prior knowledge plays a crucial role in system identification, which comprises the
three basic elements of prior knowledge, objectives, and data. It should be understood
that these elements are not independent. Data is frequently gathered based on prior
system knowledge and modeling goals, resulting in an appropriate experimental
design. At the same time, observed data may also cause one to change one's
objectives or even one's prior understanding.
A model structure based on physical laws and additional relationships with matching
physical parameters is a logical choice at that point, leading to a structure known
as a "white-box model." However, if some of these characteristics are unknown
or uncertain and, for example, accurate forecasts must be made, the parameters
can be inferred from the data. These adjustable parameters are found in model
sets or "gray-box" models. In other situations, such as control applications, linear
models are typically adequate and will not always refer to the process's underlying
physical laws and relationships. These models are frequently referred to as "black-
box" models. Along with selecting the structure, we must also select the model
representation, such as the state space, impulse response, or differential equation
representation, and the model parameterization, which pertains to selecting
the variable parameters [1].
The identification method, which numerically solves the parameter estimation
problem, must be chosen to quantify the fit between model output and observed
data. A criterion function must also be supplied. The model’s [16] suitability for
its intended use is then evaluated in a subsequent phase known as model validation
[2, 11]. If the model is deemed suitable at that point, it can be used; otherwise, the
method must be repeated, which is typically the case in practice.
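A minimal sketch of this workflow — choose a model structure, estimate its parameters by minimizing a fit criterion, and validate on held-out data — is shown below. The ARX-style structure, the least-squares criterion, and the 70/30 data split are illustrative assumptions, not prescriptions from the text.

```python
import numpy as np

def build_regressors(u, y, na=2, nb=2):
    """Build ARX regressors for y(t) = -a1*y(t-1)-...-a_na*y(t-na) + b1*u(t-1)+...+b_nb*u(t-nb)."""
    T = len(y)
    rows, targets = [], []
    for t in range(max(na, nb), T):
        row = np.concatenate([-y[t - na:t][::-1], u[t - nb:t][::-1]])
        rows.append(row)
        targets.append(y[t])
    return np.array(rows), np.array(targets)

# Simulated data from an assumed "true" second-order process plus measurement noise.
rng = np.random.default_rng(1)
u = rng.normal(size=400)
y = np.zeros(400)
for t in range(2, 400):
    y[t] = 1.2 * y[t - 1] - 0.5 * y[t - 2] + 0.8 * u[t - 1] + 0.02 * rng.normal()

Phi, Y = build_regressors(u, y)
split = int(0.7 * len(Y))                      # estimation / validation split
theta, *_ = np.linalg.lstsq(Phi[:split], Y[:split], rcond=None)

# Model validation: quantify the fit on data that was not used for estimation.
Y_hat = Phi[split:] @ theta
fit = 100 * (1 - np.linalg.norm(Y[split:] - Y_hat) / np.linalg.norm(Y[split:] - Y[split:].mean()))
print("estimated parameters:", theta)
print("validation fit (%):", round(fit, 1))
```

If the validation fit were judged inadequate, one would return to the structure-selection step, exactly as the iterative procedure above describes.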
Control theory has a subfield known as optimization for time-varying systems. This
subfield focuses on designing and implementing optimal control techniques for
time-varying systems whose parameter values, dynamics, or disturbances change
with time [5]. The objective of time-varying optimization is to design control
strategies that remain effective as the system changes over time.
$$f(y) = \sum_{i=1}^{n} \alpha_i(x) \int_{0}^{\infty} E\left[\sum_{i=0}^{n}\bigl(u_{n-i}(t) - u_0(t)\bigr)\right]^{2} dt \qquad (2.9)$$
In the above algorithm, we assume that the system is described by the differential
equation ẏ = f(x, u, t, α), where x is the state of the system, u is the control input,
t is time, and α represents uncertain system parameters. We also assume that we
possess a control rule, denoted μ(x, t, α), that maps the current state of the system
and the current time into a control input.

The method employs a sliding mode control strategy to deal with uncertainty in
the system parameters. At every time step, we estimate the unknown parameters α̂_k
and measure the system state. Then, we compute the control input u_{k+1} for the
subsequent time step using the control rule μ(x_k, t_k, α̂_k).

After applying the control input to the system and measuring the output, we
calculate the error e_k between the desired output r_k and the actual output y_k. By
computing a sliding surface s_k from this error, we can update the control
law by applying the formula Δμ_k = −k_μ s_k, where k_μ is a tuning parameter.
Until the required control performance is attained, the algorithm iterates continu-
ously, updating the control law and estimating the uncertain parameters at each time
step.
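The loop described above can be sketched in Python. Everything in the snippet — the toy plant model, the parameter-estimate update, the sliding-surface definition, and the gain values — is an assumed stand-in chosen only to make the structure of the algorithm concrete, not the authors' exact method.

```python
import numpy as np

# Assumed toy plant: x_dot = -alpha*x + u, with alpha unknown to the controller.
alpha_true, dt, k_mu, lam = 1.5, 0.01, 2.0, 0.5

x = 0.0            # measured system state / output
mu_gain = 1.0      # adjustable gain of the control rule mu(.)
alpha_hat = 1.0    # current estimate of the uncertain parameter
r = 1.0            # desired output (setpoint)

for k in range(2000):
    # Control rule mu(x_k, t_k, alpha_hat_k): proportional law plus a feedforward
    # term built from the current parameter estimate.
    u = mu_gain * (r - x) + alpha_hat * r

    # Apply the control input to the (simulated) plant and measure the output.
    x = x + dt * (-alpha_true * x + u)
    y = x

    # Error between desired and actual output, and an assumed sliding surface
    # (defined so that Delta_mu = -k_mu * s raises the gain while y < r).
    e = r - y
    s = -e

    # Update the control law: Delta_mu_k = -k_mu * s_k, as in the text.
    mu_gain = mu_gain - k_mu * s * dt

    # Crude estimate update for the uncertain parameter (gradient-style step).
    alpha_hat = alpha_hat + lam * e * dt

print("final output:", round(y, 3), "estimated alpha:", round(alpha_hat, 3))
```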
Systems that change with time are said to have time-varying properties or behaviors.
These systems are used in various industrial applications, and it is essential to
analyze and control them to improve and optimize them. These are some examples
of time-varying systems being used in the industrial environment:
Time-varying systems are widely used in the chemical industries, where they are
utilized to increase the quality of chemical production and optimize process control
to achieve maximum efficiency. Time-varying systems can be used to optimize each
stage of a chemical process, leading to greater yields, reduced waste, and enhanced
profitability. Chemical processes are frequently complex and involve a number of
phases. Several procedures in the chemical industry, described below, can be handled
using time-varying system techniques:
• Fault Detection and Diagnosis (FDD): FDD techniques can detect and diagnose
faults in real time, enabling fast corrective action and reducing downtime; they are
often used together with tools such as fault tree analysis (FTA). Fault detection and
diagnosis can use time-varying system methodologies to better account
for changes in the process dynamics over time and to increase the accuracy of
fault detection.
• Process Design and Optimization: Time-varying models of the process can be
used to simulate its behavior under various operating conditions and to optimize
the process design toward a desired performance objective, such as maximum yield
or minimum waste.
Each node in the link-state routing (LSR) algorithm keeps a complete map of the
network architecture. Nodes frequently share details on their own local linkages
and the links of their neighbors. Each node builds a complete map of the network
architecture using this data, which it then utilizes to calculate the shortest path
between a source and a destination node. Data can be sent from one node in a
network to numerous nodes using the routing mechanism known as multicast.
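To make the link-state idea concrete, the sketch below computes shortest paths from one node over a small assumed topology using Dijkstra's algorithm, which is the computation each node performs on its own copy of the network map; the graph and link costs are invented for illustration.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances and predecessors from `source` over a weighted graph.

    graph: dict mapping node -> {neighbor: link_cost}
    """
    dist = {node: float("inf") for node in graph}
    prev = {node: None for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue                      # stale queue entry, skip it
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist[nbr]:
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    return dist, prev

# Hypothetical network map, as each router would hold it after flooding link states.
topology = {
    "A": {"B": 1, "C": 4},
    "B": {"A": 1, "C": 2, "D": 5},
    "C": {"A": 4, "B": 2, "D": 1},
    "D": {"B": 5, "C": 1},
}
dist, prev = dijkstra(topology, "A")
print(dist)   # shortest cost A -> D is 4, reached via B and C
```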
Time-varying systems can monitor plant development, soil conditions, and envi-
ronmental elements including temperature, humidity, and light in smart agriculture.
Smart agriculture uses closed-loop control systems to maintain greenhouse temper-
atures. In a closed-loop control system, sensors report greenhouse temperature to
a controller. The controller controls the heating or cooling system to maintain the
setpoint. Because the greenhouse temperature changes over time, the controller must
adjust the heating or cooling system.
Time-varying systems can regulate greenhouse humidity, light, and temperature.
A closed-loop control system might measure greenhouse humidity and alter the
ventilation system to control air moisture.
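A minimal closed-loop temperature controller of the kind described above might look like the following sketch; the thermal model, setpoint, and proportional gain are assumptions used only for illustration.

```python
import random

setpoint = 25.0          # desired greenhouse temperature (deg C)
temp = 18.0              # current temperature reported by the sensor
kp = 0.8                 # proportional gain of the controller
ambient = 15.0           # outside temperature (disturbance)

for minute in range(120):
    error = setpoint - temp                 # sensor feedback compared with the setpoint
    heater_power = max(0.0, kp * error)     # controller output (heating only)

    # Very rough thermal model: heating raises temperature, heat leaks to ambient,
    # and a small random term stands in for weather-driven variation over time.
    temp += 0.1 * heater_power - 0.02 * (temp - ambient) + random.uniform(-0.05, 0.05)

print("temperature after 2 hours:", round(temp, 1))
```

Because this is a purely proportional loop, the simulated temperature settles slightly below the setpoint; adding integral action (a PI controller) would remove that steady-state offset.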
References

Chapter 3
Signal Processing and Its Applications
Everything we use and rely on in our daily lives is enabled by signal processing.
Signal processing is a branch of electrical engineering that uses various models and
theories to analyze data generated by physical devices. It models and analyzes data
representations of physical events and data generated across multiple disciplines.
These devices include computers, radios, video devices, cellphones, intelligent
connected devices, and much more. Our modern world relies heavily on signal
processing: the field spans biotechnology, entertainment, and social interaction, and
it enhances our ability to communicate and share information. We live in a digital
world thanks to signal processing.
Signal processing refers to any modification or analysis of a signal. These
processing techniques are used to improve the efficiency of the system. Signal
processing has applications in nearly every field of life. But, before we get into
that, let us define a signal. A signal is an electrical impulse or a wave that carries
information. The electrical impulse refers to the changing currents, voltages,
or electromagnetic waves that transmit data at any point in electrical systems.
Examples of signals are speech, voice, video streams, and mobile phone signals.
Noise is also considered a signal, but the information carried by noise is unwanted;
that is why it is considered undesirable. Let us briefly go through its types.
This section briefly discusses basic information about signal processing; the mathe-
matics used in signal processing, particularly various transforms; etc.
Signal processing is classified into different categories based on the types of signals.
These categories are analog signal processing, digital signal processing, nonlinear
signal processing, and statistical signal processing. Basic information on each type
of signal processing is given below:
• Analog Signal Processing: Processing of continuous signals that have not been
digitized; their values are typically represented as a voltage, an electric current, or an
electrical charge around components. There are many applications in the real
world where analog signal processing is still relevant, and even when sampling
and discretizing signals for digital processing, it is still the first step.
• Digital Signal Processing: Processing of signals that have been digitized, i.e.,
sampled discretely in time. Digital circuits such as specialized digital signal processors
(DSPs), FPGAs, or ARM chips perform the processing after an analog signal
has been converted into a digital version [1]. In many applications, digital processing
offers several advantages over analog processing, such as error detection and
correction in transmission and data compression. Digital wireless communication
and navigation systems are also based on this technology.
• Nonlinear Signal Processing: Because linear methods and systems are easy to
interpret and implement, classical signal processing relies on linear methods
and techniques. Some applications, however, would benefit from nonlinear
processing methods being included in the methodology. Several nonlinear signal
processing methods have proven efficient in addressing real-world challenges,
including wavelet and filterbank denoising, sparse sampling, and fractional
processes.
• Statistical Signal Processing: Modeling the system under study is often benefi-
cial for many applications. However, unlike physical models such as a swinging
pendulum, it is impossible to predict the behavior of most signals of interest with
100% accuracy. It will be necessary to include as many “broad” properties as
possible, such as the variation and correlation structure, to develop a model for
such a signal. Mathematics and stochastic processes are best used to describe this
phenomenon. It is possible to express optimality criteria and evaluate achievable
performance using these models.
The systems can be classified in different ways such as linear and nonlinear
systems, time-variant and time-invariant systems, linear time-variant and linear
time-invariant systems, static and dynamic systems, causal and non-causal systems,
and stable and unstable systems. The system can be analyzed using various signals,
such as analog and digital.
3.2 Transforms Used for Analysis of Signals and Systems
A transform is a mathematical model that generates an output signal from the
original input signal. It is often convenient to describe a complicated operation
using this concept, since a complex operation can be decomposed into a sequence of
simpler ones. According to the Unified Signal Theory, the output domain can differ
from the input domain. Additionally, multidimensional signals can be transformed
between different dimensions. Various transforms such as the Laplace, Z, Fourier,
and wavelet transforms are used to analyze various kinds of systems whose inputs
and outputs are signals.
The Laplace transform (LT) is named after Pierre-Simon Laplace, who introduced it
in the late eighteenth century. This operator transforms signals in the time domain
into signals in a complex frequency domain called the "S" domain. In this case, "S"
represents the complex frequency domain, and "s" represents the complex frequency
variable. The complex frequency s is defined as:

$$s = \sigma + j\omega \qquad (3.1)$$
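For reference, the transform itself (not written out in the excerpt above, but standard in the signal processing literature) maps a time-domain signal x(t) into the S domain through the defining integral

$$X(s) = \mathcal{L}\{x(t)\} = \int_{0}^{\infty} x(t)\, e^{-st}\, dt, \qquad s = \sigma + j\omega .$$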
3.2.2 Z-Transform
In mathematical terms, the Z-transform converts difference equations from the time
domain into algebraic equations in the z-domain. Z-transforms are very useful tools
for analyzing linear shift-invariant (LSI) systems. Difference equations are used to
represent LSI discrete-time systems. To solve these time-domain difference equations,
the Z-transform is first used to convert them into algebraic equations in the z-domain.
The algebraic equations are then manipulated in the z-domain, and the result
is converted back into the time domain using the inverse Z-transform. There are
two types of Z-transform: unilateral (one-sided) and bilateral (two-sided).
Mathematically, if x(n) is a discrete-time signal or sequence, then its bilateral or
two-sided Z-transform is defined as:

$$Z[x(n)] = X(z) = \sum_{n=-\infty}^{\infty} x(n)\, z^{-n} \qquad (3.2)$$

where the complex variable z can be written in polar form as

$$z = r \cdot e^{j\omega} \qquad (3.3)$$
One-sided or unilateral Z-transforms are very useful when dealing with causal
sequences; they are primarily used to solve difference equations with initial conditions.
The set of points in the z-plane for which the Z-transform X(z) of a discrete-time
sequence x(n) converges is called the region of convergence (ROC). The Z-transform
may or may not converge for a given discrete-time sequence; the sequence x(n) has no
Z-transform if the function X(z) does not converge anywhere in the z-plane. The
Z-transform offers several practical advantages for the analysis of such systems.
3.2.3 Fourier Transform
$$X(k) = \sum_{n=0}^{N-1} x(n)\, e^{-2j\pi nk/N} \qquad (3.5)$$

where x(n) is the input signal in the time domain and X(k) is the transformed signal
in the frequency domain.
Fourier transforms have the following properties:
• Linearity: if two signals a(k) and b(k) have Fourier transforms A(k) and B(k), then
the Fourier transform of any linear combination of a and b is the same linear
combination of A and B.
• Time shift: the Fourier transform of x(t − a) has the same magnitude spectrum as
that of x(t); the shift appears only as a phase factor.
• Modulation: when a function is multiplied by another function, it is modulated by
that function.
• Parseval's theorem: the Fourier transform is unitary, so the energy (the sum of
squared magnitudes) of a(k) equals the energy of its Fourier transform A(k).
• Duality: if a(k) has the Fourier transform A(k), then A(k) has the Fourier
transform a(−k).
Two widely used discrete forms are the discrete-time/discrete Fourier transform
(DTFT/DFT) and the fast Fourier transform (FFT), an efficient algorithm for
computing the DFT.
The DFT of a length-N sequence x[n] is defined as:

$$X[k] = \sum_{n=0}^{N-1} x[n]\, e^{-j 2\pi k n / N} \qquad (3.6)$$

where X[k] denotes the Fourier-transformed signal, x[n] denotes the original signal,
and N is the length of the sequence to be transformed.
The inverse DFT is defined as:

$$x[n] = \frac{1}{N} \sum_{k=0}^{N-1} X[k]\, W_N^{-kn} \qquad (3.7)$$

where W_N = e^{-j2\pi/N} is the twiddle factor.
Fourier transforms can be computed more efficiently using the fast Fourier transform
(FFT). The FFT's main advantage is speed: it reduces the number of calculations
required to analyze a waveform. It is used to design electrical circuits,
solve differential equations, process and analyze signals, and filter images.
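As a small illustration of the DFT/FFT in practice (using NumPy's FFT routine; the signal parameters are arbitrary example assumptions), the following snippet computes the spectrum of a sampled sinusoid and recovers its dominant frequency:

```python
import numpy as np

fs = 1000                                   # sampling rate (Hz)
t = np.arange(0, 1.0, 1.0 / fs)             # 1 second of samples
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

X = np.fft.fft(x)                           # DFT computed via the FFT algorithm
freqs = np.fft.fftfreq(len(x), d=1.0 / fs)  # frequency bin centers

# Keep the positive-frequency half and find the strongest component.
half = len(x) // 2
peak = freqs[:half][np.argmax(np.abs(X[:half]))]
print("dominant frequency:", peak, "Hz")    # expected: 50.0 Hz
```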
We can use wavelets to extract more useful information from a signal by transforming
it from one representation to another; this is known as the wavelet transform (WT).
Wavelet transforms can be mathematically represented as convolutions between
wavelet functions and signals. Signals can be analyzed in time-frequency space
with the WT to reduce noise while preserving significant components. Signal
processing has benefited greatly from the WT over the past 20 years.
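A short sketch of wavelet-based noise reduction is given below. It assumes the third-party PyWavelets package (`pywt`) is available; the wavelet family, decomposition level, and threshold rule are illustrative choices, not recommendations from the text.

```python
import numpy as np
import pywt  # PyWavelets, assumed installed (pip install PyWavelets)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.normal(size=t.size)

# Multilevel discrete wavelet decomposition of the noisy signal.
coeffs = pywt.wavedec(noisy, "db4", level=4)

# Soft-threshold the detail coefficients (universal threshold estimate).
sigma = np.median(np.abs(coeffs[-1])) / 0.6745
thr = sigma * np.sqrt(2 * np.log(noisy.size))
coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]

# Reconstruct: noise is reduced while the main 5 Hz component is preserved.
denoised = pywt.waverec(coeffs, "db4")
print("noise std before/after:",
      np.std(noisy - clean).round(3),
      np.std(denoised[:clean.size] - clean).round(3))
```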
3.3 Designing of Discrete-Time Systems
A linear time-invariant discrete-time system with input a(n) and output b(n) can be
described by the difference equation:

$$b(n) = -\sum_{k=1}^{N} p_k\, b(n-k) + \sum_{k=0}^{M} q_k\, a(n-k) \qquad (3.9)$$

Z-transforms and the rational system function also describe linear time-invariant
discrete-time systems, as below:

$$H(z) = \frac{\sum_{k=0}^{M} q_k\, z^{-k}}{1 + \sum_{k=1}^{N} p_k\, z^{-k}} \qquad (3.10)$$
The general equation for the FIR filter is given below:

$$b(n) = \sum_{k=0}^{M-1} p_k\, a(n-k) \qquad (3.11)$$

$$H(z) = \sum_{k=0}^{M-1} p_k\, z^{-k} \qquad (3.12)$$

Further, the unit sample response of the FIR filter can be described as below:

$$h(n) = \begin{cases} p_n, & 0 \le n \le M-1 \\ 0, & \text{otherwise} \end{cases} \qquad (3.13)$$
The length of the FIR filter is set to M. The direct method is a simple structure
used in the literature for implementing a FIR system [2]. The FIR filter can
be realized using different structures such as cascades, frequency sampling, and
lattices. The following are the primary advantages of FIR filters:
• They can achieve an exactly linear phase.
• They are always stable.
• The design problem is generally linear.
• They can be implemented efficiently in hardware.
• The filter startup transients have finite duration.
A major disadvantage of FIR filters is that they typically require much higher
filter orders to achieve the same performance levels as IIR filters. Consequently,
these filters are often much slower than IIR filters with equal performance. The FIR
filter can be designed using various methods such as windowing, multiband with
transition bands, constrained least squares, arbitrary response, and raised cosine [3].
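As a hedged illustration of the window-based design method mentioned above, the following sketch uses SciPy to design a linear-phase low-pass FIR filter and apply it; the filter order, cutoff, and sampling rate are arbitrary example values.

```python
import numpy as np
from scipy import signal

fs = 8000                     # sampling rate (Hz)
numtaps = 101                 # filter length M (odd length -> exactly linear phase)
cutoff = 1000                 # cutoff frequency (Hz)

# Window-method FIR design (Hamming window by default).
taps = signal.firwin(numtaps, cutoff, fs=fs)

# Apply the filter to a test signal containing 500 Hz and 3000 Hz components.
t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)
y = signal.lfilter(taps, 1.0, x)   # FIR filter: the denominator is simply 1

# The 3000 Hz component is strongly attenuated; the 500 Hz one passes.
w, h = signal.freqz(taps, worN=2048, fs=fs)
print("gain at 500 Hz :", round(abs(h[np.argmin(abs(w - 500))]), 3))
print("gain at 3000 Hz:", round(abs(h[np.argmin(abs(w - 3000))]), 3))
```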
A system described by Eqs. 3.9 and 3.10 can be realized as an IIR system using
direct-form, cascade, lattice, and lattice-ladder structures similar to FIR filters. One
difference is that the IIR filter can also be realized in parallel form, whereas the FIR
filter is realized serially [2]. An IIR filter is generally more cost-effective than a
corresponding FIR filter since it meets a set of specifications with a much lower filter
order [4]. The IIR filter can be designed using various methods such as analog prototyping, direct design,
generalized Butterworth design, and parametric modeling [4]. The IIR filter types
such as classical IIR filters, Butterworth, Chebyshev Types I and II, elliptic, and
Bessel are available in the literature [4].
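A comparable SciPy sketch for the analog-prototype (Butterworth) IIR design route is shown below; again the order, cutoff, and sampling rate are example assumptions. Note that this 4th-order IIR filter plays a role similar to the 101-tap FIR filter above, illustrating the much lower order mentioned in the text.

```python
import numpy as np
from scipy import signal

fs = 8000
b, a = signal.butter(4, 1000, btype="low", fs=fs)   # 4th-order low-pass IIR filter

t = np.arange(0, 0.1, 1 / fs)
x = np.sin(2 * np.pi * 500 * t) + np.sin(2 * np.pi * 3000 * t)

# filtfilt applies the filter forward and backward (zero-phase filtering).
y = signal.filtfilt(b, a, x)

w, h = signal.freqz(b, a, worN=2048, fs=fs)
print("gain at 500 Hz :", round(abs(h[np.argmin(abs(w - 500))]), 3))
print("gain at 3000 Hz:", round(abs(h[np.argmin(abs(w - 3000))]), 3))
```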
3.4 Industrial Applications of Signal Processing (SP)

The IEEE Signal Processing Society's Industry Digital Signal Processing (DSP)
Standing Committee (IDSP-SC) focuses on identifying and evaluating emerging
digital signal processing applications and technologies [5]. There are several signal
processing applications and technologies recommended by the committee, includ-
ing digital and software radio frequency (RF) processing, single-chip solutions,
nanoscale technology, cognitive and reconfigurable radar, the Internet of Things,
cloud computing, service computing, and new-generation TV (smart TV, 3D TV,
4K TV, UHD TV), and perception by autonomous systems [5].
Signal processing algorithms have been converted to silicon using three different
computing platforms: application-specific integrated circuits (ASICs), digital signal
processors (DSPs), and field-programmable gate arrays (FPGAs). A single appli-
cation device usually incorporates a variety of signal processing algorithms. This
suggests that this single application device needs different computing platforms/IC
chips, which is practically inefficient.
An ASIC-based solution’s power consumption and performance are excellent;
however, this solution cannot support multiple standards and applications. The
performance of digital processing systems is highly dependent on signal process-
ing algorithms, which cannot be upgraded using an ASIC-based solution. The
flexible nature of FPGA- and DSP-based solutions allows them to meet several
standards (or models or applications) and support a wide range of signal processing
algorithms. However, FPGA/DSP-based solutions are less efficient from
a power consumption and cost perspective. In some cases, these two solutions
can be combined and viewed as an accelerator-based platform, producing some
advantages in both performance and flexibility. This third solution, however, has a
significant problem in that it is difficult to program/port different algorithms into
its platform, primarily because its control units, computational units, data units,
and accelerators have heterogeneous interfaces. A single-chip solution is highly
desirable by combining power efficiency, cost reduction, time to market, flexibility,
and programming ability.
A wide range of devices and places are expected to become IP-enabled and be
integrated into the Internet soon. Various examples of intelligent objects include
mobile phones, personal health devices, appliances, home security, and entertain-
ment systems. In addition, there are RFID, industrial automation, smart metering,
and environmental monitoring systems. There are many benefits that the intelligent
Internet of Things can offer. These include environmental monitoring, energy
savings, intelligent transportation, more efficient factories, better logistics, smart
agriculture, food safety, and better healthcare.
The following related areas will greatly depend on signal processing technology
and practice: wireless embedded technology, ubiquitous information acquisition
and sensing, RFID algorithms and circuit integration, signal and data coding and
compression, security authentication, key management algorithms, and routing
algorithms. In the smart grid, a number of significant components are involved in the
signal processing process: bulk generation, transmission, distribution, customers,
operations, markets, and service providers. Three layers are included: a power
and energy layer, a communication layer, and an information technology/computer
layer. There is no doubt that signal processing will be primarily used in the second
layer, encompassing smart metering and its wireless communication architecture,
microcontrollers with ultralow power consumption, models for power grid data and
state estimation, and algorithms for fault detection, isolation, recovery, and load
balancing in real time.
Cloud computing provides on-demand access to shared computing resources (e.g.,
networks, servers, storage, applications, and services) without the need for end users to know the physical location or reconfigure
the systems for delivering the services from a business or information service stand-
point. Dynamic allocation of cloud resources is essential to maximize the system’s
performance. Therefore, designing and implementing dynamic resource allocation
algorithms will be a crucial signal processing topic in cloud computing. Among
the issues discussed are algorithms and real-time implementations for compression,
coding, storage, processing, security, privacy, IP management, communication,
streaming (ultrahigh bandwidth), modeling, and evaluating the quality of services
and experiences [5].
References
1. R.C. Gonzalez, R.E. Woods, Digital Image Processing (Pearson Education India, Upper Saddle
River, 2008)
2. J.G. Proakis, D.G. Manolakis, Digital Signal Processing: Principles, Algorithms, and Applica-
tions (Pearson Education India, Noida, 2007)
3. FIR Filter Design. Web Link: https://www.mathworks.com/help/signal/ug/fir-filter-design.html.
Last Access February 2023
4. IIR Filter Design. Web link: https://www.mathworks.com/help/signal/ug/iir-filter-design.html.
Last Access February 2023
5. F.L. Luo, W. Williams, R.M. Rao, R. Narasimha, M.J. Montpetit, Trends in signal processing
applications and industry technology [in the spotlight]. IEEE Signal Proc. Mag, 29(1), 184–174
(2011)
Chapter 4
Image Processing and Its Applications
Whenever we look at a digital image, we see many elements, each with a specific
location and value [1]. These elements are referred to as pixels, picture elements, or
image elements. Digital images are commonly represented by pixels. What happens
when we look at an object? The process begins with the eye capturing the object and
sending signals to the brain. The brain decodes these signals and obtains valuable
information. Image processing is the process of converting images into useful data.
We begin processing images as soon as we are born and continue doing so
throughout our lives; it is an integral part of living. Therefore, the combination of the eye
and the brain creates the ultimate imaging system. In image processing, algorithms
are written to process images captured by a camera. Here the camera replaces the
eye, and the computer does the brain’s work. Image processing involves changing
the nature of an image to either (1) improve its visual information for human
interpretation or (2) make it more suitable for autonomous machine perception.
Today, image processing is used around the world. Image processing applications
can be classified based on the energy source used to generate images. The principal
energy source for images today is the electromagnetic energy spectrum, and other
energy sources may be acoustic, ultrasonic, and electronic [1]. Figure 4.1 shows the
electromagnetic energy spectrum. For example, the image generated by gamma-ray
is called gamma-ray imaging. The image created by an X-ray is called an X-ray
image. These images are widely used in medical science to inspect the human body.
Gamma-ray imaging is primarily used in nuclear medicine and astronomy.
A single image can be processed using image processing at one end and viewed
through computer vision at the other end. There are three basic types of image
processing:
• Low-Level Image Processing: Basic operations such as noise reduction, con-
trast enhancement, and sharpening are included in low-level image processing.
These processes use images as inputs and outputs.
• Medium-Level Image Processing: Object classification, image segmentation,
and description of objects presented in an image are operations included in
medium-level image processing.
An image with only two intensity levels, such as 0 for black and 1 or 255 for white,
is called a binary image. This type of image is widely used in image segmentation
and highlights certain regions in a color image. The examples of binary images are
shown in Fig. 4.2.
In our modern world, we are used to seeing RGB or color images, which are stored
as three 8-bit matrices (24 bits per pixel), so each pixel can take one of roughly 16.7
million colors. RGB refers to an image's red, green, and blue channels. Up to now,
we have dealt with images having a single channel.
In other words, any value of a matrix could be defined by two coordinates. However,
to specify the value of a matrix element, we require three unique coordinates for
three equal-sized matrices (called channels), each with a value between 0 and 255.
When a pixel value is (0, 0, 0) in an RGB image, it is black. It is white when it
is (255, 255, 255). Any combination of numbers between those two can create all
the colors in nature. For example, (255, 0, 0) corresponds to red (since only the red
channel is active here). The colors (0, 255, 0) and (0, 0, 255) are green and blue,
respectively. The examples of grayscale images are shown in Fig. 4.4 [2].
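The pixel conventions described above can be checked directly with NumPy; the tiny 2×2 image below is an invented example.

```python
import numpy as np

# A 2 x 2 RGB image: three channels, 8 bits each (values 0-255).
img = np.array([
    [[0,   0,   0], [255, 255, 255]],   # black pixel, white pixel
    [[255, 0,   0], [0,   0,   255]],   # red pixel, blue pixel
], dtype=np.uint8)

print(img.shape)        # (2, 2, 3): rows, columns, channels
print(img[0, 0])        # [0 0 0]     -> black
print(img[1, 0])        # [255 0 0]   -> pure red (only the red channel is active)
```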
4.2 Fundamental Steps of Image Processing
The acquired image is manipulated in this step to meet the specific requirements
of the task for which it is intended. Usually, these techniques highlight hidden
or significant details in an image, such as adjusting contrast and brightness. The
process of image enhancement is highly subjective.
During this step, colored images are processed, such as color correction or model-
ing.
A wavelet is a unit for representing images with different levels of resolution. For
data compression and pyramidal representation, images are subdivided into smaller
regions.
This step divides an image into different parts to simplify and/or make it easier to
analyze and interpret. As a result of image segmentation, computers can focus their
attention on the important parts of an image, thereby improving the performance of
automated systems.
This step of the image segmentation procedure involves determining whether the
segmented region should be displayed as a boundary or a complete region. The
purpose of the description is to extract attributes that provide some quantitative
information of interest or can be used to differentiate one class of objects from
another.
As soon as the objects have been segmented from an image, the automated system
needs to assign a label to the object that humans can use to understand what the
object is.
4.3 Image Processing Methods
In the spatial domain, images are represented by pixels. Spatial domain methods
process images directly based on pixel values. A general equation can be applied to
all spatial domain methods:

$$g(x, y) = P[f(x, y)] \qquad (4.1)$$

Here, the input image is f(x, y), the processed image is g(x, y), and the processing
operation is P, defined over some neighborhood of (x, y); pixels within the
neighborhood of (x, y) are generally considered neighborhood pixels, and each
position in the sub-image is processed to produce the corresponding output value.

• Point Processing: In the simplest case, the operation acts on individual pixels
through a gray-level transformation:

$$S = P(R) \qquad (4.2)$$

In Eq. (4.2), S is the gray level of the processed image g(x, y), and R represents
the gray level of the original image f(x, y). The common operations for
this processing are identity transformation, image negative, contrast stretching,
contrast thresholding, gray-level slicing, bit plane slicing, log transformation,
power law transformation, and histogram processing [1].
• Neighborhood Processing: Neighborhood processing extends level transforma-
tion by applying an operating function to a neighborhood pixel of every target
pixel. The mask process is used in this process. This method will create a new
image with pixel values based on the gray-level values under the mask. Figure 4.6
shows this process.
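A small sketch of both kinds of spatial-domain operation — a point operation (the image negative from the list above) and a neighborhood (mask) operation (a 3×3 averaging mask) — is given below; the test image is synthetic.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(6, 6), dtype=np.uint8)   # synthetic grayscale image

# Point processing: image negative, S = P(R) = 255 - R, applied pixel by pixel.
negative = 255 - img

# Neighborhood processing: each output pixel is computed from the 3x3 mask of
# gray levels around the corresponding input pixel (here a simple average).
mask = np.ones((3, 3)) / 9.0
smoothed = ndimage.convolve(img.astype(float), mask, mode="reflect")

print(img[2, 2], negative[2, 2], round(smoothed[2, 2], 1))
```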
Images obtained through the acquisition process do not contain exactly the same
information as the objects they represent; there is some degradation in the acquired
images. During the acquisition of an image, many sensors or devices can
cause degradation. For example, in remote sensing and astronomy, images are
degraded due to various atmospheric conditions, various lighting conditions in
space, and the camera position of satellites. In many applications, point degradations
(due to noise) and spatial degradations (due to blurring) are the most common forms
of degradation. Image restoration is the recovery of an original image from a
degraded one. Restoration appears similar to image enhancement by definition, but
there are some differences between the two processes.
• The image enhancement process is subjective, while the image restoration
process is objective.
• Image enhancement procedures utilize the psychophysical aspects of the human
visual system (HVS) to manipulate an image. To reconstruct the original image,
images are restored by modeling degradation and applying inverse processes.
• Quantitative parameters cannot be used to measure image enhancement. Quanti-
tative parameters can be used to measure image restoration.
• Contrast stretching is an example of image enhancement. Removing blur from
an image is an example of image restoration.
Various image processing operations that deal with the shape of features in an
image are known as morphological image processing (or morphology) [1, 5]. A
binary image can be corrected with these operations by correcting the image’s
imperfections. Variously shaped structuring elements can extract shape features
(such as edges, holes, corners, and cracks) from an image. This process is used
in industrial computer vision applications, such as object recognition, image
segmentation, and defect detection. This process involves various operations, such
as erosion, dilation, opening, closing, etc., used to process images.
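The basic morphological operations named above can be sketched with SciPy's ndimage module on a small binary image; the image and structuring element are toy examples.

```python
import numpy as np
from scipy import ndimage

# Small binary image: a 4x4 square of foreground pixels with one noisy hole.
img = np.zeros((8, 8), dtype=bool)
img[2:6, 2:6] = True
img[3, 3] = False                        # imperfection inside the object

structure = np.ones((3, 3), dtype=bool)  # 3x3 structuring element

eroded  = ndimage.binary_erosion(img, structure=structure)   # shrinks the object
dilated = ndimage.binary_dilation(img, structure=structure)  # grows the object
opened  = ndimage.binary_opening(img, structure=structure)   # erosion then dilation
closed  = ndimage.binary_closing(img, structure=structure)   # dilation then erosion

print("hole filled by closing:", bool(closed[3, 3]))   # True: the imperfection is corrected
```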
A digital image compression process reduces the amount of redundant and irrelevant
information in the image data to store or transmit it efficiently. Redundancy in
the image can be classified into coding redundancy, interpixel redundancy, and
psychovisual redundancy.
• Coding Redundancy: In images, fewer bits should be used to represent frequently
occurring information. An image is represented by its pixel values, and the set of
symbols used to represent these values is called a code; each pixel value in an image
is assigned a code word. Usually, look-up tables (LUTs) are used to implement this
type of code. Huffman codes and arithmetic coding are examples of image
compression methods that exploit coding redundancy.
Registration of images involves transforming different sets of data into one coordi-
nate system. The data may be multiple photographs, data from various sensors, data
from multiple depths, or data from different viewpoints. The technology is used
in computer vision, medical imaging, automatic target recognition in the military,
and compiling and analyzing satellite images. This data must be registered to be
compared or integrated. Images can be registered using various methods such as
point matching, feature matching (e.g., scale-invariant feature transform (SIFT)),
etc.
4.4.1 Agriculture
4.4.2 Manufacturing
4.4.3 Automotive
Almost every automotive system has a camera and image processing system [7].
High-speed systems must detect micrometer variations from the target value to
achieve 100% quality control on the production line. Intelligent imaging systems
also provide insights into other fields, such as automobile driving, traffic control,
crash laboratories, and wind tunnels. With robust housings and electronics, high-
speed cameras can capture the scene inside and outside the crashed vehicles from
every possible angle. The HD quality of the
images ensures that engineers can follow every detail of the deformation of
car bodies. Fast-moving manufacturing processes require high-speed cameras to
analyze faults in detail. A camera’s superiority is clear when it comes to high-
speed processes. Imaging systems are increasingly taking over quality and process
control in mainstream processing. Image processing ensures brilliant surfaces
around the clock, micrometer-accurate assembly tolerances, and defect-free circuits
on increasingly prevalent chips and microcontrollers.
The cost of errors is synonymous with the cost of production for automotive man-
ufacturers. Automotive materials are produced with glass-like transparency using
cameras and laser systems. Engine developers are gaining a deeper understanding of
the processes behind injection and combustion that are not visible to the naked eye.
Camera systems can see and analyze even the slightest turbulence in wind tunnels.
Colleagues use laser systems for interior design, tire development, and vehicle body
design to detect and assess vibrations and structure-borne sounds. It is not only
used for diagnosis but also for measuring and documenting the effectiveness of their
measures.
Fully automated processes become more flexible with robotics. Image processing
software calculates the location and plans based on the images captured by
cameras. A sensor cluster containing several cameras allows highly accurate 3D
coordinates to be determined for large objects. The Six Sigma approach to quality
management in the automotive industry matches 100% real-time inspection on the
production line. Following the define-measure-analyze-improve-control (DMAIC)
loop, manufacturers and major suppliers strive to achieve a zero-defect objective.
This is made possible by camera systems combined with downstream analysis
software.
4.4.4 Healthcare
A robot uses images for certain robotic tasks. Imagery equipment and the necessary
programming and software can be available from robotics specialists to handle
visual input encountered by robots. Robots are taught to recognize and respond
to images as part of the programming and teaching process. Software suites are
available from some companies for direct installation on equipment, or you may
program your own. In robotics, a camera system is used for navigation as an example
of image processing. There are many ways to teach robots to follow lines, dots,
or other visual cues, such as lasers. Targets in the surrounding environment are
identified and tracked using a crude camera and image processing system. In a
factory, this can be helpful for automating processes like collecting and delivering
products by robots.
Several digital image processing applications are widely used in defense and
security, including small target detection and tracking, missile guidance, vehicle
navigation, wide-area surveillance, and automatic/aided target recognition [8]. In
defense and security applications, image processing can reduce the workload of
human analysts so that more image data can be collected in an ever-increasing
volume. Researchers who work on image processing also aim to develop algorithms
and approaches to facilitate autonomous systems’ development. This will enable
them to make decisions and take action based on input from all sensors.
References
1. R.C. Gonzalez, R.E. Woods, Digital Image Processing (Pearson Education India, Upper Saddle
River, 2008)
2. The University of South Carolina SIPI Image Database. http://sipi.usc.edu/database/database.
php. Last Access January 2023
3. R. Kundu, Image Processing: Techniques, Types, & Applications (2023). Weblink: https://www.
v7labs.com/blog/image-processing-guide. Last Access Jan 2023
4. R.C. Gonzalez, R.E. Woods, Digital Image Processing Using MATLAB (TATA McGraw-Hill
Education, New York, 2009)
5. N. Efford, Digital Image Processing: A Practical Introduction Using JAVA (Pearson Education,
London, 2000)
6. Application of Industrial Image Processing. Web link: https://www.vision.fraunhofer.de/en/
application-of-industrial-image-processing.html. Last Access Jan 2023
7. Image Processing in the Automotive Industry (2019). Web link: https://www.industr.com/en/
image-processing-in-the-automotive-industry-2356834. Last Access Jan 2023
8. E. Du, R. Ives, A. van Nevel, J.H. She, Advanced image processing for defense and security
applications. EURASIP J. Adv. Signal Proc. 2010(1), 1–1 (2011)
Chapter 5
Artificial Intelligence and Its Applications
This type of learning (supervised learning) is often used in real-time applications and
practical approaches. The model learns by analyzing the previous experiences
it has had with the information provided. This type of learning involves mapping an
input (x) to an output (y) by an algorithm that gives a mapping function (f):

$$y = f(x) \qquad (5.1)$$

In unsupervised learning, by contrast, the learning system finds its own answer to
an input but is not provided with the correct answer. Association and clustering
problems are typically solved using algorithms based on unsupervised learning.
The details of different machine learning algorithms are covered in this section.
For the testing of this algorithm, prior knowledge of the dataset is essential. It
is the analyst's responsibility to gather this dataset knowledge. The steps of this
algorithm are as follows [7]:
• For each input data class, identify the training areas.
• Compute the mean, variance, and covariance of the data.
• Classify the data.
• Finally, map the input classes.
These algorithms have the advantage of detecting and correcting errors during
evaluation. Their main disadvantages are that they are time-consuming and costly.
Additionally, the researcher, scientist, or analyst may not consider all conditions
affecting the dataset's quality when selecting a training dataset, which can introduce
human error into the performance of these algorithms.
The naive Bayes algorithm is a supervised machine learning algorithm based on Bayes' theorem and the
"naive" assumption that the features of each training and test sample are independent [10].
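As a rough illustration (not the authors' own implementation), the sketch below trains a Gaussian naive Bayes classifier with the scikit-learn library on a hypothetical toy dataset; the data values, split ratio, and random seed are placeholder assumptions.

```python
# Hedged sketch: Gaussian naive Bayes on placeholder data (scikit-learn assumed available).
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))             # hypothetical feature matrix: 100 samples, 4 features
y = rng.integers(0, 2, size=100)          # hypothetical binary labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

model = GaussianNB()                       # "naive" assumption: features are conditionally independent
model.fit(X_train, y_train)                # estimates per-class feature means and variances
print("Test accuracy:", model.score(X_test, y_test))
```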
Vapnik proposed the support vector machine (SVM) in 1995 [11]. This classifier uses a
decision boundary (hyperplane) to separate input data belonging to one class from input
data belonging to another class. When the data can be separated by linear functions, the
SVM's optimized hyperplane has the largest margin; a loss function is used if the input
data can only be separated by a nonlinear function. Data that are not linearly separable
are transformed into linearly separable data by SVMs using different kernel transforms.
An SVM commonly uses three kernel functions: polynomial learning machines, radial
basis function networks (RBFN), and two-layer perceptrons. The RBF kernel is generally
used for training classifiers because it is more powerful and effective than the other two
kernel functions [11, 12]. A classifier like this can effectively classify input data into two
classes, but it can also classify data into multiple classes using error-correcting output
codes. It is very easy to understand and has been proven to be accurate.
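A minimal sketch of such a classifier, assuming the scikit-learn library, is shown below; the synthetic dataset and hyperparameter values are placeholder assumptions, and the RBF kernel is chosen to mirror the discussion above.

```python
# Hedged sketch: SVM with an RBF kernel on synthetic, non-linearly-separable data.
from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_circles(n_samples=300, noise=0.1, factor=0.4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The RBF kernel implicitly maps the data into a space where a large-margin
# separating hyperplane can be found.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```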
The decision tree algorithm can solve regression and classification problems [13].
Learning decision rules from the training dataset creates a model that can classify
classes. This algorithm is very simple to understand compared with other supervised
learning algorithms. The problem is represented as a tree structure: tree nodes represent
dataset attributes, and leaf nodes represent class labels. A decision tree is a classifier
capable of classifying multiclass input datasets.
Several decision tree algorithms are available in the literature [14], such as ID3,
C4.5, C5.0, and CART. ID3, also known as Iterative Dichotomiser 3, was developed by
Ross Quinlan in 1986; it builds multiway trees over the categorical features of the data.
The C4.5 algorithm succeeds ID3 and converts trained trees into if-then rules. The C5.0
algorithm is the latest version in this family. Classification and regression trees (CART)
are similar to the C4.5 algorithm; however, CART supports numerical target variables
and does not require computing rule sets to construct trees.
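As an illustrative sketch (assuming scikit-learn, whose tree learner is an optimized CART variant), the example below fits a shallow decision tree and prints its learned if-then splits; the Iris dataset and depth limit are arbitrary choices for demonstration.

```python
# Hedged sketch: fit a small decision tree and inspect its decision rules.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)   # internal nodes test attributes
tree.fit(X, y)                                               # leaves hold class labels

print(export_text(tree))                                     # readable if-then representation
```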
The random forest algorithm is based on constructing multiple decision trees
[15–17]. When classifying a new sample from an input dataset, the input is passed
down every tree of the forest. The classifications of the individual trees are then
combined, and the resulting value determines the class assigned to the sample. A
random forest algorithm consists of two stages: creating the random forest and
making predictions with the generated forest.
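A brief sketch of the two stages, again assuming scikit-learn and a toy dataset, might look as follows; the number of trees is an arbitrary placeholder.

```python
# Hedged sketch: build a random forest, then combine the trees' predictions.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

# Stage 1: create the forest (each tree is grown on a bootstrap sample of the data).
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# Stage 2: predict with the forest; the trees' outputs are combined into one class per sample.
print(forest.predict(X[:3]))
```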
Linear regression models the relationship between an independent variable x and a dependent variable y with the linear function
y = Ax + B. (5.2)
In Eq. 5.2, A and B are the constant factors. In the supervised learning process
using linear regression, the goal is to find the exact value of constants “A” and “B”
using the datasets. Using these values, i.e., the constants, you can predict the future
value of “y” for any value of “x.” Specifically, a simple linear regression involves
a single independent variable, whereas multiple linear regression is used if there is
more than one independent variable.
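For illustration, the short sketch below estimates the constants A and B of Eq. 5.2 from a handful of hypothetical (x, y) pairs using a least-squares fit with NumPy and then predicts y for a new x; the data values are invented placeholders.

```python
# Hedged sketch: least-squares estimation of A and B in y = Ax + B.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])       # hypothetical independent variable
y = np.array([2.1, 4.0, 6.2, 7.9, 10.1])      # hypothetical dependent variable

A, B = np.polyfit(x, y, deg=1)                # first-degree polynomial fit: slope A, intercept B
print(f"A = {A:.3f}, B = {B:.3f}")
print("Predicted y for x = 6:", A * 6 + B)    # use the fitted constants for a new input
```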
A learning agent and an environment are the two components of any RL algorithm.
Agents refer to the RL algorithm, while environments refer to objects they act on.
Initially, the environment sends a state to the agent, responding to it based on
its knowledge. The environment sends the agent a pair of next-state values and
rewards in the next step. The agent uses a reward returned by the environment to
evaluate its last action by updating its knowledge. Loops continue until an episode
is terminated by the environment. Q-learning is an off-policy, model-free RL algorithm.
There are many similarities between SARSA and Q-learning. SARSA is an on-
policy algorithm, whereas Q-learning is not. Instead of learning the Q-value based
on greedy policy actions, SARSA learns it based on current policy actions.
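The difference between the two update rules can be sketched in a few lines of Python; the table size, learning rate, and discount factor below are placeholder assumptions, not values from the text.

```python
# Hedged sketch: tabular Q-learning (off-policy) vs. SARSA (on-policy) updates.
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.1, 0.9                    # placeholder learning rate and discount factor

def q_learning_update(s, a, r, s_next):
    # Off-policy: bootstrap from the greedy (maximum) Q-value of the next state.
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

def sarsa_update(s, a, r, s_next, a_next):
    # On-policy: bootstrap from the action actually chosen by the current policy.
    Q[s, a] += alpha * (r + gamma * Q[s_next, a_next] - Q[s, a])
```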
Deep Q-networks (DQNs) address the case in which the state space is too large for a
Q-table: the DQN estimates the Q-value function using a neural network. The network
outputs the corresponding Q-value for each action based on the current input state.
The details of different deep learning algorithms are covered in this section.
Convolutional neural networks (CNNs) are the deep learning networks most commonly
used for image-related applications [51–54]. A CNN consists of an input layer, an
output layer, and many hidden layers. Figure 5.3 shows the basic CNN architecture.
Several operations are carried out in the hidden layers of a CNN, such as feature
extraction, flattening of features, and classification of features; a minimal sketch
combining these operations follows the list below.
• Convolution: This operation extracts features from the input image while preserving
the relationship between pixels. Mathematically, its output is computed from two
inputs: the image pixel values and the filter (mask) values. Strides and padding are
also used to extract the features more effectively after the convolution process. The
stride controls how the filter moves across the input image to obtain better features.
Padding may be necessary when a filter does not fit the input image perfectly; in this
operation, the image is extended with zero values so that the filter works effectively.
• Nonlinearity (ReLU): Rectified linear units (ReLUs) perform a nonlinear operation
on the convolved features. In essence, a ReLU removes negative values from the
convolved feature maps; other nonlinear activation functions can also be used in its
place.
• Pooling: A pooling (downsampling) operation reduces the spatial dimensions of each
feature map. To reduce the dimensions of the extracted features, CNNs use pooling
operations such as max, sum, and average pooling.
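Putting the pieces above together, a minimal sketch with the Keras API of TensorFlow might look as follows; the input shape, filter counts, and number of output classes are placeholder assumptions rather than an architecture from the text.

```python
# Hedged sketch: convolution + ReLU, pooling, flattening, and classification in one small CNN.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(28, 28, 1)),                                 # placeholder grayscale input
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),    # convolution + ReLU
    layers.MaxPooling2D((2, 2)),                                     # pooling shrinks feature maps
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),                                                # flatten the extracted features
    layers.Dense(10, activation="softmax"),                          # placeholder: 10 output classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```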
In the last 10 years, research and development on AI-based systems have been
conducted in various areas, such as the development of new learning algorithms.
Various algorithms and models for machine learning and deep learning are being
developed everywhere. Due to the development of new models, greater data availability,
and fast computing capabilities, AI applications have increased exponentially.
Healthcare, education, banking, manufacturing, and many other industries have AI
applications. A major challenge in AI-based projects is improving model performance;
at present, no single structured process can guarantee success when implementing
ML and DL applications in business. A model's performance is one of the most
important factors in developing an AI model, and it is determined mainly by technical
factors. Deploying a machine learning or deep learning model that is not accurate
enough for the intended output makes no sense for many use cases.
In computer vision, computers and systems can detect and interpret meaningful
information from digital images, videos, and other visual signals and then take
appropriate actions or recommend further actions. The following are a few research
areas of well-known computer vision tasks [63]:
• Image Classification: Given an image, image classification assigns it to a class (a
dog, an apple, a face). The algorithm can accurately predict which class an image
belongs to. This can be useful, for example, to a social media company that wants to
automatically identify and filter objectionable images uploaded by users.
• Object Detection: Object detection builds on image classification to identify a certain
class of object in images or videos and then detects and tabulates its occurrences.
Artificial intelligence that uses computer software to understand text and speech
input in the form of natural language is known as natural language understanding
(NLU). In NLU, humans and computers can interact with each other. Computers
can understand commands without the formal syntax of computer languages by
comprehending human languages, such as Gujarati, Hindi, and English. In addition,
NLU allows computers to communicate with humans in their own language. Two
main techniques are used in natural language processing: syntax and semantic
analysis. The syntax of a sentence determines how words are arranged to make
grammatical sense. Language processing uses syntax to determine meaning from
grammatical rules within a language. The main research areas in AI-based NLU are
text processing, speech recognition, and speech synthesis.
5.4.5 AI in Robotics
Most problems associated with robotic navigation have been solved, at least when
working in static environments. Currently, efforts are being made to train a robot
to interact with the environment in a predictable and generalizable manner. Among
the topics of current interest is manipulation, a natural requirement in interactive
environments. Due to the difficulty of acquiring large labeled datasets, deep learning
is only beginning to influence robotics. Reinforcement learning, which can be
implemented without labeled data, could bridge this gap. However, systems must
be capable of exploring policy spaces without harming themselves or others. A
key enabler of developing robot capabilities will be advances in reliable machine
perception, including computer vision, force, and tactile perception. These advances
will be driven in part by machine learning.
In the consumer finance area as well as in the global banking sector, artificial
intelligence has many applications. In this industry, artificial intelligence can be
found in the following applications:
1. Fraud Detection: In recent years, financial fraud has been committed on a
massive scale and daily. These crimes cause major disruptions for individuals
and organizations.
2. Stock Market Trading: Stock market floor shouting is a thing of the past. Most
major trading transactions on the stock markets are handled by algorithms that
make decisions and react much faster than humans ever could.
A unique application of artificial intelligence can be found in the insurance world
within the broader financial services landscape. A few examples are:
1. AI-Powered Underwriting: For decades, underwriting decisions relied largely on
manual processes, with data inputs like medical exams added to the mix. As a
result of artificial intelligence, insurance com-
panies use massive datasets to assess risks based on factors such as prescription
drug history and pet ownership.
2. Claims Processing: Artificial intelligence can handle simple claims today. A
simple example of it is chatbots. Human involvement in claims decisions will
likely decrease as machine vision and artificial intelligence capabilities increase.
Human labor has traditionally been a big part of healthcare, but artificial intelligence
is becoming an increasingly vital component. Artificial intelligence offers a wide
range of healthcare services, including data mining, diagnostic imaging, medication
management, drug discovery, robotic surgery, and medical imaging, to identify
patterns and provide more accurate diagnoses and treatments. Technology giants
like Microsoft, Google, Apple, and IBM significantly contribute to the healthcare
sector.
It’s unsurprising that artificial intelligence has a wide range of potential appli-
cations in the life sciences because they generate large amounts of data through
experiments. This involves discovering and developing new drugs, conducting more
efficient clinical trials, ensuring treatment is tailored to each patient, and pinpointing
diseases more accurately.
In the information age, new technologies have affected many industries. A 2016 CB
Insights report documents extensive use of artificial intelligence technology, with the
companies covered expected to invest US $54 million in artificial intelligence by 2020.
Here are a few examples of how artificial intelligence is impacting the healthcare
industry today and in the future [67]:
• Maintaining Healthcare Data: Data management has become a widely used
application of AI and digital automation in healthcare due to the necessity of
compiling and analyzing information (including medical records). Using AI-
based systems, data can be collected, stored, reformatted, and traced more
efficiently and consistently.
• Doing Repetitive Jobs: AI systems can perform data entry and analyze X-rays, CT
scans, and other routine inputs faster and more accurately. Analyzing data in
cardiology and radiology takes a lot of time and resources; in the future, cardiologists
and radiologists may need to apply human review only in the most critical cases.
• Design of Treatment Method: The use of artificial intelligence systems helps
physicians select the right, individually tailored treatment for each patient based
on notes and reports in their patients’ files, external research, and clinical
expertise.
• Digital Consultation: AI-powered apps, such as Babylon in the United King-
dom, provide medical consultations based on a user’s medical history and general
knowledge of medicine. The app compares user symptoms with a database of
illnesses using speech recognition. Babylon recommends actions based on the
user’s health history.
• Virtual Nurses: It is possible to monitor patients’ health and follow treatments
between doctor’s visits with the help of startups that have developed digital
nurses. Using machine learning, this program helps chronic illness patients.
Parents of sick children can access basic health information and advice from
Boston Children’s Hospital’s Alexa app. A doctor’s visit is suggested based on
symptoms; the app can answer questions about symptoms.
• Medication Management: An app created by the Patient Institute of Health
monitors the use of patients’ medications. Patients can automatically verify
that they are taking their medications using the smartphone’s webcam and
artificial intelligence. Patients with serious medical conditions, patients who
ignore doctors’ advice, and clinical trial participants are most likely to use this
app.
• Drug Creation: Developing new drugs requires billions of dollars and more than a
decade of research. Increasing the speed and efficiency of this process can change
the world. A computer program powered by artificial intelligence is being used to
scan existing drugs in search of ones that can be redesigned to combat the Ebola
virus. The program found two medications that may reduce the risk of Ebola
infection in a single day, whereas this type of analysis typically takes months or
years, a difference that can save thousands of lives.
There is little room for error in the oil, gas, and energy sector because of safety and
environmental concerns. Energy companies are turning to artificial intelligence to
increase efficiency without incurring costs.
It is critical to use data effectively to optimize both individual flights and the broader
aviation infrastructure in order to maintain safe, efficient aviation, particularly in
the context of rising fuel prices. This sector uses AI in the following ways:
• Identify Routes with High Demand: Providing enough flights between specific
destinations while avoiding flying too many routes is crucial to maximizing
profits while retaining customer loyalty. Airlines can use AI models to make
informed decisions about route offerings based on factors like Internet traffic,
macroeconomic trends, and seasonal tourism data.
• Service to Customers: The staffing capacity of most airlines is insufficient to
handle individual customer queries and needs during major disruptions, such as
those caused by massive weather events. AI is increasingly being incorporated
into automated messaging to extract critical information from customer messages
and respond accordingly. The customer may be directed to information about
reporting lost luggage, for instance, if he or she inquires about their luggage.
The AI-powered industry has performed various steps to achieve business goals in
any use case. These working steps for an AI-powered industry are shown in Fig. 5.4
and are described below:
• Data Collection Process: This is a very important and basic step for any industry
to find appropriate data for specific use cases or projects. The data can be
obtained from various sources such as publicly available platforms, collaborating
with relevant authorities, etc. The data collection involves various steps, such as
selecting, synthesizing, and sourcing the dataset.
• Data Engineering and Model Development: This second step is for product
development and contains the processes of data engineering and model develop-
ment. In the data engineering process, various steps, such as data exploration,
data cleaning, data normalization, feature engineering, and scaling, are performed
to obtain suitable datasets for model development, while, in the model development
process, model selection, model training, performance evaluation of the model,
and model tuning are performed to obtain a correctly trained model which can be
used for developing a product (a minimal sketch of these steps appears at the end of
this section).
• Production: In this step, the operation of the trained model is tested in various
conditions to check its generalization usage and working ability in various condi-
tions. This process contains various steps: registration, deployment, monitoring,
and retraining.
• Legal Constraints: This is the most important process during product develop-
ment using AI technology for any business case. This process contains various
steps, such as legal and ethical approval, security, and product acceptance in
terms of generalization.
Once an AI-trained model or system has fulfilled all required conditions for a specific
business use case as a consumer product, the company can launch this model or
system as a product that can be sold anywhere in the world.
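As a hedged sketch (not a prescribed pipeline) of the data engineering and model development steps described above, the example below assumes a hypothetical CSV file with a "target" column and uses the scikit-learn library for cleaning, scaling, training, and evaluation.

```python
# Hedged sketch of the workflow steps above; file name and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Data collection: load a (hypothetical) dataset gathered for the use case.
data = pd.read_csv("use_case_data.csv").dropna()     # dropping missing rows stands in for data cleaning
X = data.drop(columns=["target"])                    # placeholder target column
y = data["target"]

# Data engineering (scaling) and model development (training, evaluation).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
pipeline = Pipeline([
    ("scale", StandardScaler()),                     # feature scaling
    ("model", LogisticRegression(max_iter=1000)),    # model selection and training
])
pipeline.fit(X_train, y_train)
print("Held-out accuracy:", pipeline.score(X_test, y_test))   # performance evaluation before production
```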
References
1. C.M. Bishop, Pattern Recognition and Machine Learning (Springer International Publishing,
Germany, 2006).
2. K.P. Murphy, Machine Learning—A Probabilistic Perspective (The MIT Press, Cambridge,
2012).
3. I. Goodfellow, Y. Bengio, A. Courville, Deep Learning (The MIT Press, Cambridge, 2016)
4. S.B. Kotsiantis, Supervised machine learning: a review of classification techniques. Informat-
ica 31, 249–268 (2007)
5. R. Thanki, S. Borra, Application of machine learning algorithms for classification and security
of diagnostic images, in Machine Learning in Bio-Signal Analysis and Diagnostic Imaging
(Academic Press, New York, 2019), pp. 273–292
6. A basic introduction to neural networks. http://pages.cs.wisc.edu/~bolo/shipyard/neural/local.
html Accessed Feb 2018
7. S.S. Nath, G. Mishra, J. Kar, S. Chakraborty, N. Dey, A survey of image classification
methods and techniques, in 2014 International Conference on Control, Instrumentation,
Communication and Computational Technologies (ICCICCT) (IEEE, 2014), pp. 554–557
8. S.D. Jawak, P. Devliyal, A.J. Luis, A comprehensive review of pixel-oriented and object-
oriented methods for information extraction from remotely sensed satellite images with a
special emphasis on cryospheric applications. Adv. Remote Sensing 4(3), 177 (2015)
9. T. Lillesand, R.W. Kiefer, J. Chipman, Remote Sensing and Image Interpretation (Wiley, New
York, 2014)
10. H. Zhang, The optimality of naive Bayes. AA 1(2), 3 (2004)
11. V. Vapnik, The Nature of Statistical Learning Theory (Springer, New York, 1995)
12. C.W. Hsu, C.C. Chang, C.J. Lin, A practical guide to support vector classification (2016).
https://www.csie.ntu.edu.tw/~cjlin/papers/guide/guide.pdf. Accessed Feb 2018
13. R. Saxena, How Decision Tree Algorithm Works (2017): https://dataaspirant.com/2017/01/30/
how-decision-tree-algorithm-works/. Accessed Aug 2018
14. A.D. Kulkarni, A. Shrestha, Multispectral image analysis using decision trees. Int. J. Adv.
Comput. Sci. Appl. 8(6), 11–18 (2017)
15. A. Liaw, M. Wiener, Classification and regression by random forest. R news 2(3), 18–22 (2002)
16. M.R. Segal, Machine Learning Benchmarks and Random Forest Regression (Kluwer Academic
Publishers, Netherlands, 2004)
17. T.F. Cootes, M.C. Ionita, C. Lindner, P. Sauer, Robust and accurate shape model fitting using
random forest regression voting, in European Conference on Computer Vision (Springer,
Berlin, 2012), pp. 278–291
18. D.N. Kumar, Remote Sensing (2014). https://nptel.ac.in/courses/105108077/. Accessed July
2018
19. K. Wagstaff, C. Cardie, S. Rogers, S. Schrödl, Constrained k-means clustering with background
knowledge, in ICML, vol. 1 (2001), pp. 577–584
20. J.A. Hartigan, M.A. Wong, Algorithm AS 136: a k-means clustering algorithm. J. R. Stat. Soc.
C (Appl. Stat.) 28(1), 100–108 (1979)
21. T. Kanungo, D.M. Mount, N.S. Netanyahu, C.D. Piatko, R. Silverman, A.Y. Wu, An efficient
k-means clustering algorithm: analysis and implementation. IEEE Trans. Pattern Anal. Mach.
Intell. 24(7) 881–892 (2002)
22. K. Alsabti, S. Ranka, V. Singh, An efficient k-means clustering algorithm (1997)
23. A. Likas, N. Vlassis, J.J. Verbeek, The global k-means clustering algorithm. Pattern Recogn.
36(2), 451–461 (2003)
24. L. Kaufman, P.J. Rousseeuw, Finding Groups in Data: An Introduction to Cluster Analysis,
vol. 344 (Wiley, New York, 2009)
25. A.K. Jain, R.C. Dubes, Algorithms for clustering data (1988)
26. K. Mehrotra, C.K. Mohan, S. Ranka, Elements of Artificial Neural Networks (MIT Press,
Cambridge, 1997)
27. I. Jolliffe, Principal component analysis, in International Encyclopedia of Statistical Science
(Springer, Berlin, 2011), pp. 1094–1096
28. R.A. Schowengerdt, Remote Sensing: Models and Methods for Image Processing (Elsevier,
Amsterdam, 2006)
29. R.C. Gonzalez, R.E. Woods, S.L. Eddins, Digital Image Processing Using MATLAB, vol. 624
(Pearson-Prentice-Hall, Upper Saddle River, 2004)
30. P. Comon, Independent component analysis, a new concept? Signal Process. 36(3), 287–314
(1994)
31. X. Benlin, L. Fangfang, M. Xingliang, J. Huazhong, Study on independent component
analysis application in classification and change detection of multispectral images. Int. Archiv.
Photogramm. Remote Sensing Spatial Inform. Sci. 37(B7), 871–876 (2008)
32. I. Dópido, A. Villa, A. Plaza, P. Gamba, A quantitative and comparative assessment of
unmixing-based feature extraction techniques for hyperspectral image classification. IEEE J.
Sel. Topics Appl. Earth Observ. Remote Sensor 5(2), 421–435 (2012)
33. M.S.M. Al-Taei, A.H.T. Al-Ghrairi, Satellite image classification using moment and SVD
method. Int. J. Comput. 23(1), 10–34 (2016)
34. S. Brindha, Satellite image enhancement using DWT–SVD and segmentation using MRR–
MRF model. J. Netw. Commun. Emerg. Technol. 1(1), 6–10 (2015)
35. R.K. Jidigam, T.H. Austin, M. Stamp, Singular value decomposition and metamorphic
detection. J. Comput. Virol. Hacking Techn. 11, 203–216 (2015)
36. C. Biernacki, G. Celeux, G. Govaert, Assessing a mixture model for clustering with the
integrated completed likelihood. IEEE Trans. Pattern Anal. Mach. Intell. 22(7), 719–725
(2000)
37. C. Biernacki, G. Celeux, G. Govaert, Choosing starting values for the EM algorithm for getting
the highest likelihood in multivariate Gaussian mixture models. Comput. Stat. Data Anal. 41(3-
4), 561–575 (2003)
38. Z. Zivkovic, Improved adaptive Gaussian mixture model for background subtraction, in
Proceedings of the 17th International Conference on Pattern Recognition, 2004. ICPR 2004,
vol. 2 (IEEE, 2004), pp. 28–31
39. C. Maugis, G. Celeux, M.L. Martin-Magniette, Variable selection for clustering with Gaussian
mixture models. Biometrics 65(3), 701–709 (2009)
40. G. McLachlan, D. Peel, Finite Mixture Models. Wiley Series in Probability and Statistics
(2000)
41. T. Kohonen, Self-organized formation of topologically correct feature maps. Biol. Cybern.
43(1), 59–69 (1982)
42. T. Kohonen, Analysis of a simple self-organizing process. Biol. Cybern. 44(2), 135–140 (1982)
43. H. Ritter, T. Kohonen, Self-organizing semantic maps. Biol. Cybern. 61(4), 241–254 (1989)
44. J.A. Kangas, T.K. Kohonen, J.T. Laaksonen, Variants of self-organizing maps. IEEE Trans.
Neural Netw. 1(1), 93–99 (1990)
45. E. Erwin, K. Obermayer, K. Schulten, Self-organizing maps: ordering, convergence properties
and energy functions. Biol. Cybern. 67(1), 47–55 (1992)
46. S. Kaski, T. Honkela, K. Lagus, T. Kohonen, WEBSOM—self-organizing maps of document
collections. Neurocomputing 21(1–3), 101–117 (1998)
47. M. Dittenbach, D. Merkl, A. Rauber, The growing hierarchical self-organizing map, in Neural
Networks, 2000. IJCNN 2000, Proceedings of the IEEE-INNS-ENNS International Joint
Conference on, vol. 6 (IEEE, 2000), pp. 15–19
48. D. Fumo, Types of Machine Learning Algorithms You Should Know (2017): https://
towardsdatascience.com/types-of-machine-learning-algorithms-you-should-know-
953a08248861. Accessed Mar 2020
49. Q-learning in python. https://www.geeksforgeeks.org/q-learning-in-python/. Accessed Mar
2020
50. R. Moni, (SmartLab AI), Reinforcement learning algorithms—an intuitive overview
(2019): https://medium.com/@SmartLabAI/reinforcement-learning-algorithms-an-intuitive-
overview-904e2dff5bbc. Accessed Feb 2023
51. A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional
neural networks, in Advances in Neural Information Processing Systems (2012), pp. 1097–
1105
52. S. Lawrence, C.L. Giles, A.C. Tsoi, A.D. Back, Face recognition: a convolutional neural-
network approach. IEEE Trans. Neural Netw. 8(1), 98–113 (1997)
53. H. Kandi, D. Mishra, S.R.S. Gorthi, Exploring the learning capabilities of convolutional neural
networks for robust image watermarking. Comput. Secur. 65, 247–268 (2017)
54. S.M. Mun, S.H. Nam, H.U. Jang, D. Kim, H.K. Lee, A robust blind watermarking using
convolutional neural network (2017). ArXiv preprint arXiv: 1704.03248
55. A. Krizhevsky, I. Sutskever, G.E. Hinton, Imagenet classification with deep convolutional
neural networks. Commun. ACM 60(6), 84–90 (2017)
56. LeNet, http://deeplearning.net/tutorial/lenet.html. Accessed Feb 2019
57. Faster R-CNN, https://github.com/rbgirshick/py-faster-rcnn. Accessed Feb 2019
58. GoogLeNet, https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/googlenet.
html. Accessed Feb 2019
59. ResNet, https://github.com/gcr/torch-residual-networks. Accessed Feb 2019
60. R. Wang, T. Lei, R. Cui, B. Zhang, H. Meng, A.K. Nandi, Medical image segmentation using
deep learning: a survey. IET Image Process. 16(5), 1243–1267 (2022)
61. AI Research Trends (2016). https://ai100.stanford.edu/2016-report/section-i-what-artificial-
intelligence/ai-research-trends. Accessed Feb 2023
62. P. Soni, Eight hot research domain topics in Artificial Intelligence (2020). https://er.yuvayana.
org/8-hot-research-domain-topics-in-artificial-intelligence/. Accessed Feb 2023
63. Computer Vision Examples (2023). https://www.ibm.com/topics/computer-vision. Accessed
Feb 2023
64. Personalized Recommendation Systems: Five Hot Research Topics You Must Know (2018).
https://www.microsoft.com/en-us/research/lab/microsoft-research-asia/articles/personalized-
recommendation-systems/. Accessed Feb 2023
65. Examples of Artificial Intelligence (AI) in 7 Industries (2022). https://emeritus.org/blog/
examples-of-artificial-intelligence-ai/. Accessed Feb 2023
66. Cem Dilmegani (2022). Applications of AI in Manufacturing in 2023. https://research.
aimultiple.com/manufacturing-ai/. Accessed Jan 2023
67. M. Kesavan, How Will Artificial Intelligence Reshape The Telecom Industry? (2022). https://
itchronicles.com/artificial-intelligence/how-will-artificial-intelligence-reshape-the-telecom-
industry/. Accessed Jan 2023
68. Swetha, 10 common applications of artificial intelligence in healthcare (2018). https://medium.
com/artificial-intelligence-usm-systems/10-common-applications-of-artificial-intelligence-
in-health-care-9d34ccccda5c. Accessed Apr 2020
Chapter 6
Advanced Technologies for Industrial
Applications
Technology is rapidly evolving today, allowing for faster change and progress and
accelerating the rate of change. Emerging technology is exciting, especially when
it gives a field unexplored possibilities. Each year, new technologies grow more
ubiquitous, and the digital tools that will be available in the future will be no
exception. AI-as-a-Service and edge computing are only two examples of new tools
that allow businesses to complete tasks in novel ways. The world is undergoing the
fourth industrial revolution based on advanced technologies such as AI, machine
learning, the Internet of Things, blockchain, etc. Researchers discovered ways to
maintain and grow capacity, work securely, and meet the needs of important areas
like medical devices and producing the ordinary things that keep the world running.
In many respects, technology has enabled manufacturers to handle these problems, with
features like automation and remote monitoring and operation leading the way,
allowing them to ramp up operations while keeping people safe and healthy.
In this chapter, we will discuss a few key and extremely valuable tools for
industries in a number of different ways.
These technologies are not going away soon. Mobile applications, robots, Wi-Fi cameras, scanners, and drones are
being used to stop the spread of viruses. Industry 4.0 has shaped the significant
contribution of digital technology to pandemic control.
However, due to the diverse range of Industry 4.0 technologies, such as mobile,
cloud computing, Big Data, analytics tools, machine-to-machine (M2M) communication,
3D printing, and robots, the route to digital transformation is not easy, even though
these are among the technologies that sparked Industry 4.0 and broadened its scope.
The term “Industrial Internet of Things” was created to explain the Internet
of Things (IoT) as it is used in a variety of industries, including manufacturing
(Industry 4.0), logistics, oil and gas, transportation, energy/utilities, mining and
metals, aviation, and other industrial sectors, as well as the use cases that are specific
to these industries.
These technologies are part of the industrial Internet of Things (IIoT), one of
the most well-known technology concepts. In an industrial context, the IIoT is a
physical network of things, objects, or devices (that have embedded technology) for
sensing and remote control, allowing better integration between the physical and
cyber worlds.
To create an even more complicated system that includes people, robots, and
machines, the Internet of Everything (IoE) generalizes machine-to-machine (M2M)
connections for the Internet of Things (IoT). IoT in healthcare is generally called
IoHT (Internet of Healthcare Things) or the Internet of Medical Things (IoMT).
IoHT primarily focuses on the wireless connecting of a network of medical
equipment and body sensors with the cloud to gather, analyze, organize, and process
health data. The proper protocols are used for secure connections and effective
machine-to-machine data transfer by health devices that collect data in real time.
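As a rough, hypothetical illustration of such real-time, machine-to-machine data transfer over a secured connection, the sketch below sends one invented heart-rate reading to a cloud endpoint over HTTPS; the URL, token, and payload schema are assumptions, not part of any specific IoHT product.

```python
# Hedged sketch: a body sensor sending a reading to a cloud endpoint over HTTPS.
# The URL, token, and payload schema are hypothetical placeholders.
import requests

reading = {
    "patient_id": "P-001",                 # placeholder identifiers and values
    "heart_rate_bpm": 72,
    "timestamp": "2023-01-01T10:00:00Z",
}

response = requests.post(
    "https://ioht.example-hospital.org/api/vitals",      # placeholder endpoint
    json=reading,
    headers={"Authorization": "Bearer <device-token>"},  # placeholder credential
    timeout=5,
)
response.raise_for_status()                # fail loudly if the cloud did not accept the data
```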
The Internet of Things (IoT) is a network of smart devices, wireless sensors, and
systems that combine several recent technological advancements. In the healthcare
industry, low-power, low-latency technologies are in high demand. With the devel-
opment of wireless communications, the network structure has significantly changed
in this time period. Additionally, some research investigates how the Internet
of Things (IoT) will support the next-generation network architecture, indicating
how embedded devices can quickly connect with one another. The configuration
of wireless and low-power, low-latency medical equipment for IoT devices will
fundamentally alter healthcare.
Continuous monitoring of the health of an unexpectedly large number of patients
throughout both the pre- and post-infection stages is highly essential during the
COVID-19 pandemic. Both carers or healthcare practitioners and patients have
effectively adopted remote patient monitoring, screening, and treatment via tele-
health facilitated by the Internet of Health Things (IoHT). Smart devices powered by
the Internet of Things (IoHT) are proliferating everywhere, especially in the midst
of a global epidemic. However, healthcare is seen as one of the IoT’s most difficult
application sectors due to a large number of needs. The graphical overview of IoHT
is shown in Fig. 6.1.
IoHT has a great ability to produce good results with the aid of cutting-edge
technologies. In the field of medicine, it has turned an original concept into reality,
offering COVID-19 patients the best possible care and supporting accurate surgery.
During the ongoing pandemic, complicated situations are readily managed and
controlled digitally. IoHT takes on fresh issues in the medical industry to develop
top-notch assistance programs for physicians, surgeons, and patients. To
implement IoHT successfully, certain process steps are carefully identified. These
steps included the setup of networking protocols and the acquisition of sensor data
with a secured transmission system.
IoHT integrates machines, tools, and medical supplies to produce intelligent
information systems that are tailored to the needs of each COVID-19 patient. An
alternative interdisciplinary strategy is required to maximize output, quality, and
understanding of emerging diseases. IoHT technology tracks changes in critical
patient data to obtain pertinent information. The various IoHT technologies that were useful in
healthcare during the COVID-19 pandemic are covered in Table 6.1.
Robotics is the study of robots, which are electromechanical devices used for
various tasks. Due to their popularity and ability to accomplish activities that people
cannot do, the most common robots are used in dangerous environments. For the
past 15–20 years, the most common uses of robotics have involved basic industrial
or warehouse applications or teleoperated mobile robots with cameras to see objects
out of reach. For instance, flying robots (also known as drones) are used for disaster
response, in addition to automated guided vehicles (AGVs) for material movement
in factories and warehouses, and underwater robots are employed to search for and
find shipwrecks in the deepest parts of our oceans.
Despite the fact that using robots in this way has been very successful over the
years, the usage of completely autonomous robots is in no way represented by these
examples. Some robots can do activities independently, while others require human
assistance to complete or direct them. Robots can be employed in various industries,
including the medical area, military applications, and space communication.
Based on the control system it features, an automatic robot is a type of manipulated
robotic system and is regarded as one of the earliest robotic systems. Automatic
robots are categorized into four basic groups based on their traits and intended
uses.
1. Programmable robots
2. Non-programmable robots
3. Adaptive robots
4. Intelligent robots (collaborative robots and soft robots)
The next sections survey use cases of intelligent robots: cobots and soft robots.
The main goal of this chapter is to focus on the advanced technologies used in
different industrial domains. The automotive industry comprises many businesses and
organizations that aim to design, develop, market, manufacture, and sell automobiles
using human-interface robots.
Fig. 6.2 Example of soft robotics (Source: Humanoid robot on mild cognitive impairment older
adults)
Table 6.2 Different soft robots with their specific use cases
Sr. no. | Type of soft robot | Purpose of soft robot
1 | Soft robotic catheters | Navigate complicated, curved blood vessels or other body parts
2 | Soft robotic exoskeletons | Help mobility-impaired people recover
3 | Soft robotic prosthetic devices | Be more flexible and patient-friendly than typical prosthetic systems
4 | Soft robotic endoscopes | Explore complicated body cavities
5 | Octobot | Underwater exploration and monitoring
6 | Soft robotic puppets | Entertainment purposes, such as in theme parks or interactive exhibits
The first prototype of the Pisa/IIT SoftHand [7], a highly integrated robot hand
with a humanoid shape, robustness, and compliance, is displayed and described.
Extensive grasping experiments and grasp-force measurements support the hand shown in Fig. 6.3.
A dual-arm mobile platform called ALTER-EGO [8], designed by its authors
using soft robotic technology for the actuation and manipulation layers, is shown
in Fig. 6.4. The flexibility, adaptivity, and robustness of this type of technology’s
features enable ALTER-EGO to interact with its surroundings and objects and
improve safety when the robot is near people.
In today’s world, automation and control have replaced human work, which is
also an advancement toward smarter cities. The automotive industry is one of
the reputed examples of smart manufacturing units. Various organizations and
companies provide smart manufacturing in automotive industries by giving robotic
solutions and decreasing manpower in field sites.
ABB is a Swiss company with over 130 years of technical innovation. ABB is a
pioneer in Industry 4.0 and a leader in industrial digitization today. Robots made
by ABB are robust, adaptive, and versatile thanks to their single- and dual-arm
designs. An extensive selection of industrial robots is available from KUKA. No
matter the application’s difficulty, you will always discover the appropriate one.
Rockwell Automation also provides feasible and efficient solutions for various
automotive industries.
Industrial robotics is becoming more and more prevalent due to their effec-
tiveness and precision, especially in the manufacturing business, even though full
automation and the employment of robots in a residential setting are still the
exceptions rather than the rule. The Statista Technology Market Forecast predicts
that by 2021, over 500,000 industrial robot systems will be in use worldwide.
According to Fig. 6.5, based on the corresponding dataset, sales of robots targeted
toward two industries in particular account for the largest portion of total revenue.
By introducing edge computing and beyond-5G networking, everything becomes
available at remote sites faster and with minimal end-to-end transmission delay.
The Internet of Things enables better transportation efficiency,
cutting-edge vehicle management capabilities, and a superior driving experience in
the automotive sector, paving the path for autonomous cars, which were formerly
thought to be a future vision.
More complicated improvements will become available as embedded vehicle IoT
systems develop. Additionally, the ability of linked car technology and the speed at
which mobile communications develop allow automakers to keep introducing fresh
and intriguing services.
HMIs are user interfaces or dashboards that connect people to machines, systems,
and devices. HMI is most commonly used in industrial processes for screens that
allow users to interact with devices. Graphical user interfaces (GUIs) and human-
machine interfaces (HMIs) have some similarities but differ; GUIs are often used
for visualization in HMIs.
Fig. 6.5 Industrial robot revenues. Source: Statista Technology Market Outlook
The main purpose of HMIs is to provide insight into mechanical performance and
progress, regardless of the format or the term you use to refer to them. HMIs can
be used in industrial settings to:
• Visualize data
• Track the time, trends, and tags associated with production
• Monitor key performance indicators
• Monitor machine inputs and outputs
Most industrial organizations use HMI technology to interact with their machines
and optimize their industrial processes. Operators, system integrators, and engineers
use HMIs the most, especially control system engineers [9]. For these professionals,
HMIs are essential tools for reviewing and monitoring processes, diagnosing
problems, and displaying data. HMI is used in the following industries:
• Energy and power
• Food
• Manufacturing and production
• Gas and oil
• Transportation
• Water processing
• And many more
6.5 AI Software
• Azure Machine Learning Studio: You can create and deploy robust machine
learning models with the Azure Machine Learning Studio. TensorFlow, PyTorch,
Python, R, and other open-source frameworks and languages are among those
supported by the platform. A wide range of users, including developers and
scientists, can benefit from Microsoft AI software.
• Infosys Nia: Businesses and enterprises can simplify AI implementation with
Infosys Nia, an AI software platform. A wide range of tasks is possible with it,
such as deep learning, natural language processing (NLP), data management, etc.
Companies can automate repetitive tasks and schedule responsibilities with AI on
existing Big Data using Infosys Nia. Thus, organizations can be more productive,
and workers can accomplish their tasks more efficiently.
• Salesforce Einstein: Businesses can use Salesforce Einstein to build AI-enabled
applications for their customers and employees with Salesforce’s analytics AI
platform for CRM (customer relationship management). Predictive models can
be built using machine learning, natural language processing, and computer
vision. Model management and data preparation are not required with artificial
intelligence tools.
• Chorus.ai: Specifically designed for sales teams on the verge of growth,
Chorus.ai offers conversation intelligence features. The application assists you in
recording, managing, and transcribing calls in real time and marking important
action items and topics.
• Observe.AI: With Observe.AI, businesses can transcribe calls and improve
performance by using automated speech recognition. User-friendly automation
tools are available in both English and Spanish. Using the most recent speech and
natural language processing technology allows businesses and organizations to
analyze calls effectively. Other business intelligence tools can also be integrated
with the tool.
• TensorFlow 2.0: For developers, TensorFlow (TF) is an open-source machine
learning and numerical computation platform based on Python. Artificial intelli-
gence software TensorFlow was created by Google.
• H2O.ai: Businesses can easily train ML models and apps with H2O.ai, an end-to-
end platform. Using AutoML functionality, beginners and experts can create or
train AI models. Besides tabular data, the platform can handle text, images, audio,
and video files. Businesses can manage digital advertising, claims management,
fraud detection, and advanced analytics and build a virtual assistant with the
open-source machine learning solution for enterprises.
• C3 AI: C3 AI provides AI software as a service (SaaS) to accelerate digital
transformation and build AI applications. The C3 AI Suite and C3 AI applications
are available from C3.ai
as software solutions for artificial intelligence. This AI platform company offers
a variety of commercial applications, including energy management, predictive
maintenance, fraud detection, anti-money laundering, inventory optimization,
and predictive CRM.
• IBM Watson: Using IBM Watson, companies and organizations can automate
complex machine learning processes, predict future results, and optimize their
operations.
• Theano: According to the product's website, Theano uses machine learning to
diagnose bugs and fix malfunctions independently, with minimal support from
outside.
• OpenNN: OpenNN, an open-source software library that uses neural network
technology, can interpret data more quickly and accurately. OpenNN claims to be
faster than its competitors at analyzing and loading massive datasets and training
models, according to its website.
• Tellius: Tellius, an AI-driven software, helps businesses better understand
their strategies, successes, and growth opportunities. Using Tellius’s platform,
employees can access an intelligent search function that organizes data and
makes it easier to understand. Their business outcomes can be analyzed and
understood through this process.
• Gong.io: Gong.io, an AI-driven platform, analyzes customer interactions, fore-
casts future deals, and visualizes sales pipelines.
• Zia by Zoho: With Zoho’s Zia, companies can gather organizational knowledge
and turn customer feedback into strategies using a cloud-based AI platform.
According to Zia’s website, its AI tools can analyze client schedules, sales
patterns, and workflow patterns to improve employee performance.
• TimeHero: Users can manage their projects, to-do lists, and schedules using
TimeHero’s AI-enabled time management platform. According to TimeHero’s
site, the platform’s machine learning capabilities can notify employees when
meetings occur, when emails are due, and when projects are due.
Augmented reality (AR) and virtual reality (VR) are two different technologies used
to enhance the experience of interacting with the digital world. Augmented reality
(AR) is a technology that overlays digital information in the real-world environment.
This technology can be experienced through smartphones, tablets, smart glasses,
or headsets. It enhances real-world experiences by adding digital elements such as
images, videos, sounds, and 3D models to the physical world.
Virtual reality (VR) is a technology that creates a simulated, computer-generated
environment that can be experienced through a headset or a display. VR immerses
the user in an artificial environment, creating the illusion of being in a different
world. The user can interact with the environment and other objects through physical
movements and controllers. Augmented and virtual reality (AR/VR) have significant
potential to transform numerous industries and bring about innovative solutions to
challenges faced by various sectors. Here are some examples of how AR/VR is
being used in different sectors:
1. Education: AR/VR can create immersive and interactive learning environments
that enhance student engagement and knowledge retention. For example, VR can
be used to simulate scientific experiments or historical events, while AR can be
used to provide real-time feedback to students during a lesson.
Blockchain is a distributed ledger technology that allows for secure and transparent
transactions between parties without needing a trusted intermediary. Conversely,
cybersecurity protects computer systems and networks from unauthorized access,
theft, damage, and other threats. Blockchain technology has a significant potential
in the realm of cybersecurity. Here are some ways that blockchain can enhance
cybersecurity:
1. Immutable Record-Keeping: Blockchain technology’s decentralized and
immutable nature makes it difficult to tamper with data stored on a blockchain,
providing greater security and integrity.
2. Cryptographic Security: Blockchain uses cryptography to secure transactions
and protect sensitive data. The use of cryptographic techniques can make it
difficult for attackers to access or manipulate data.
3. Distributed Security: Because blockchain is a distributed technology, there is
no central point of failure, and the network is less vulnerable to hacking or other
forms of cyber-attacks.
4. Decentralized Identity Management: Blockchain technology can be used for
decentralized identity management, where individuals can control their personal
data and authenticate their identity without relying on a central authority.
5. Smart Contract Security: Smart contracts, which are self-executing contracts
with the terms of the agreement written into code, can be used to automate
transactions and reduce the risk of fraud or human error.
Overall, blockchain technology can provide greater security and transparency
in the realm of cybersecurity. By creating a decentralized and secure environment,
blockchain has the potential to reduce the risk of cyber-attacks and protect sensitive
data. However, it is important to note that blockchain technology is not a panacea for
all cybersecurity issues, and proper implementation and management are essential
to ensure its effectiveness.
Various methods are used in blockchain and cybersecurity to ensure the security
and integrity of data stored on a blockchain network. Here are some of the most
common methods:
1. Encryption: Encryption is the process of converting plain text data into a coded
form that can only be read by authorized parties. This technique is commonly
used in blockchain to protect sensitive data.
2. Hashing: Hashing transforms data into a fixed-length string of characters
representing the original data. This technique is used in blockchain to create a
unique identifier for each data block, ensuring the data's integrity (see the sketch
after this list).
3. Digital Signatures: Digital signatures are used to authenticate the sender’s
identity of a message or transaction. In the blockchain, digital signatures are
used to ensure that only authorized parties can access and modify data stored on
the blockchain.
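As a minimal sketch of the hashing idea (item 2 above), the example below chains placeholder blocks with SHA-256 so that tampering with any block's data changes its identifier; the block contents are invented.

```python
# Hedged sketch: hashing links blocks so that altering any block's data changes
# its hash and breaks the chain. Block contents are placeholders.
import hashlib
import json

def block_hash(index, data, previous_hash):
    """Return a fixed-length SHA-256 identifier for one block."""
    record = json.dumps({"index": index, "data": data, "prev": previous_hash}, sort_keys=True)
    return hashlib.sha256(record.encode("utf-8")).hexdigest()

genesis = block_hash(0, "genesis block", "0" * 64)
block_1 = block_hash(1, {"from": "A", "to": "B", "amount": 10}, genesis)
print(genesis)
print(block_1)   # altering the genesis data would change this value as well
```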
Deep learning was first applied to real-world tasks in our signal processing
community for speech recognition [13] and was followed by computer vision,
natural language processing, robotics, speech synthesis, and image rendering [14].
Although deep learning and other machine learning approaches have shown impres-
sive empirical success, many issues remain unsolved. In contrast to conventional
linear modeling methods, deep learning methods are typically not interpretable.
Although deep learning methodologies achieve recognition accuracy similar to
or better than humans in many applications, they consume much more training
data, power, and computing resources. Furthermore, despite statistically impressive
results, individual accuracy results are often unreliable. Additionally, most of the
current deep learning models lack reasoning and explanation capabilities, making
them susceptible to catastrophic failures or attacks without the ability to anticipate
and prevent them.
Fundamental as well as applied research is needed to overcome these challenges.
Developing interpretable deep learning models could be a breakthrough in machine
learning: new algorithms and methods are needed that overcome the limitation of
current machine learning systems, which cannot explain their actions, decisions, and
prediction outcomes to human users even while promising to perceive, learn, decide,
and act independently. By understanding and trusting a system's outputs, users can
predict its future behavior. When neural networks and symbolic systems are integrated,
machine learning systems should become capable of creating models that explain how
the world works. Their prediction and decision-making processes will then be
interpretable in symbolic and natural language, as the systems discover the underlying
causes or logical rules that govern them.
New algorithms for reinforcement learning and unsupervised deep learning
could be a breakthrough in machine learning research, which use weak or no
training signals paired with inputs to guide the learning process. By interacting with
adversarial environments and with themselves, reinforcement learning algorithms
can allow machines to learn. However, unsupervised learning has remained the
most challenging problem for which no satisfactory algorithm has been developed.
There has been a significant delay in developing unsupervised learning techniques
compared to supervised and reinforcement deep learning techniques. Recent devel-
opments in unsupervised learning enable training prediction systems without labels
by utilizing sequential output structures and advanced optimization methods.
In today’s world, a variety of imaging technologies provide great insights into the
body’s anatomical and functional processes, including magnetic resonance imaging
(MRI), computed tomography (CT), positron emission tomography (PET), optical
coherence tomography (OCT), and ultrasound. There are still fundamental trade-
offs between these aspects due to operational, financial, and physical constraints,
even though such imaging technologies have improved significantly over time
regarding resolution, signal-to-noise ratio (SNR), and acquisition speed. Because of
noise, technology-related artifacts, poor resolution, and contrast, the acquired data
can be largely unusable in raw form. Due to its complexity, it is also challenging for
scientists and clinicians to interpret and analyze biomedical imaging data effectively
and efficiently. Biomedical imaging researchers are developing new and exciting
ways to resolve issues associated with the imaging of the human body, helping
clinicians, radiologists, pathologists, and clinical researchers visualize, diagnose,
and understand various diseases.
Although natural language processing is a powerful tool, it still has limitations and
issues: homonyms and contextual words, synonyms, sarcasm and irony, ambiguous
situations, speech or text errors, slang and colloquialisms, languages specific to a
particular domain, and languages with low resources [15].
A machine learning system requires a staggering amount of training data to work
correctly. As a result, NLP models become more intelligent as they are trained on
more data. Despite this, data (and human language!) are only increasing, as are
machine learning techniques and algorithms tailored to a particular problem. More
research and new methods will be needed to improve all these problems. NLP
techniques, algorithms, and models can be developed using advanced techniques
like artificial neural networks and deep learning. We will likely be able to come
up with solutions to some of these challenges shortly as they grow and strengthen.
Many of the limitations of NLP can be significantly eased with SaaS text analysis
platforms like MonkeyLearn, whose no-code tools offer substantial NLP benefits,
such as automating customer service processes and collecting customer feedback.
6.8.4 Robotics
This section describes some open challenges when any robot is designed for
specified applications [16]. These challenges are as per below:
1. Developing a Motion Plan: A robot must travel from one point to another
without getting stuck anywhere along the way. Since the robot's surrounding
environment is always dynamic, this is still an open research question. The
robot must sense this information and adapt to changing environments.
Enabled by technologies such as multiple-input multiple-output (MIMO) antennas,
mass-market mobile broadband (MBB) access to the Internet has been the dominant
theme of wireless communications for the past two decades. As the Internet of Things
(IoT) and Industry 4.0 emerge, wireless communications will face new technical
challenges. For example, multisensory virtual reality and Ultra-HD video demand
higher spectral efficiency and the exploration of extreme frequency bands.
Future wireless systems must simultaneously accommodate rapidly growing
enhanced MBB services, mission-critical equipment, and IoT devices. A high
degree of reliability, low latency, and energy efficiency are required for advanced
IoT applications. In addition, multidimensional sensing and accurate localization
will be essential for human-centric services in the future. The computing, com-
munication, and control operations in Industry 4.0 must be fully integrated with
artificial intelligence and machine learning. Costa and Yang identified the following open challenges in wireless communications [17]:
1. Security and privacy
2. Utilization of spectrum
3. Development of communication infrastructure
4. Enhancement in energy efficiency
5. Integration of wireless information and power transfer
6. Development of wireless access techniques
7. Analysis of dynamic architecture and network function
8. Coding and modulation
9. Resources and interference management
References
1. W. Zhao, Y. Zhang, N. Wang, Soft robotics: research, challenges, and prospects. J. Robot. Mechatron. 33(1), 45–68 (2021)
2. D. Trivedi, C.D. Rahn, W.M. Kier, I.D. Walker, Soft robotics: Biological inspiration, state of
the art, and future research. Appl. Bionics Biomech. 5(3), 99–117 (2008)
3. M. Manca, F. Paternò, C. Santoro, E. Zedda, C. Braschi, R. Franco, A. Sale, The impact of
serious games with humanoid robots on mild cognitive impairment older adults. Int. J. Hum.-
Comput. Stud. 145, 102509 (2021)
4. V. Bonnet, J. Mirabel, D. Daney, F. Lamiraux, M. Gautier, O. Stasse, Practical whole-body
elasto-geometric calibration of a humanoid robot: application to the TALOS robot. Robot.
Auton. Syst. 164, 104365 (2023)
5. C. Esterwood, L.P. Robert Jr, Three Strikes and you are out!: The impacts of multiple human–
robot trust violations and repairs on robot trustworthiness. Comput. Hum. Behav. 142, 107658
(2023)
6. R. Wen, A. Hanson, Z. Han, T. Williams, Fresh start: encouraging politeness in wakeword-driven human-robot interaction, in 2023 ACM/IEEE International Conference on Human-Robot Interaction (HRI), Stockholm, Sweden (2023)
7. M.G. Catalano, G. Grioli, E. Farnioli, A. Serio, C. Piazza, A. Bicchi, Adaptive synergies for
the design and control of the Pisa/IIT SoftHand. Int. J. Robot. Res. 33(5), 768–782 (2014)
8. G. Lentini, A. Settimi, D. Caporale, M. Garabini, G. Grioli, L. Pallottino, M.G. Catalano, A. Bicchi, Alter-Ego: a mobile robot with a functionally anthropomorphic upper body designed for physical interaction. IEEE Robot. Autom. Mag. 26(4), 94–107 (2019)
Index

A
Adaptive k-nearest neighbor algorithm, 12–13
Advanced technologies, vii, 3, 5, 73–94
Agriculture, 17, 29, 45, 78
Artificial intelligence (AI), vii, viii, 1, 3–5, 45, 49–69, 73, 76, 83–86, 94
Autonomous robots, vii, 76–81

B
Biomedical imaging, 5, 91–92
Blockchain, 3, 5, 73, 88–90

C
Collaborative robots (Cobots), 77–78
Computer vision, vii, 4, 33, 39, 43, 44, 49, 60–62, 83, 84, 91
Convolutional neural network (CNNs), 41, 44, 58–60
Cybersecurity, 5, 85, 88–90

D
Deep learning (DL), 49, 51, 58–60, 62, 76, 84, 91, 92
Defense, vii, 48, 87
Digital image, 4, 21, 33, 43, 45, 46, 48, 60
Digital TV technology, 30
Discrete signal, 20–22

F
Financial, 62–64, 89, 90, 92
Finite impulse response (FIR) filter, 25, 26
Fourier transform, 23–24

H
Healthcare, 1–4, 29, 47, 60, 64–66, 73–76, 78, 87, 90, 93

I
Image compression, 38, 43–44
Image processing, 3, 4, 25, 33–49
Industrial applications of time varying system, 15–17
Industrial Internet of Things (IIoT), 73–76
Infinite impulse response (IIR) filter, 25–27

M
Machine learning (ML), 1–5, 45, 49–60, 62, 64–66, 73, 76, 83–86, 91–92, 94
Manufacturing, 2, 15, 45, 46, 60, 63, 64, 74, 78, 81–83, 87

N
Nanotechnology, 28
Natural language processing (NLP), 5, 60, 61, 83, 84, 90–92

O
Object detection, 38, 44, 49, 60–61

R
Robust control method, 12–15

S
Signal processing, vii, 3, 4, 19–30, 42, 91
Soft robotics in automotive industries, 78–81
System identification, vii, viii, 3, 7–17

T
Time varying system identification, 7, 8, 10–13

W
Wavelets, 20, 21, 24–25, 34, 37
Wireless communications, 16, 27, 29, 74, 93–94