All Units Notes - IM
COURSE OBJECTIVES:
To understand the importance of information in business
To know about the recent information systems and technologies.
UNIT I INTRODUCTION 9
Data, Information, Information System, evolution, types based on functions and hierarchy,
Enterprise and functional information systems.
System development methodologies, Systems Analysis and Design, Data flow Diagram (DFD),
Decision table, Entity Relationship (ER), Object Oriented Analysis and Design (OOAD), UML
diagram.
DBMS – types and evolution, RDBMS, OODBMS, RODBMS, Data warehousing, Data Mart,
Data mining.
Knowledge based decision support systems, Integrating social media and mobile technologies in
Information system, Security, IS Vulnerability, Disaster Management, Computer Crimes,
Securing the Web.
Introduction to Deep learning, Big data, Pervasive Computing, Cloud computing, Advancements
in AI, IoT, Block chain, Crypto currency, Quantum computing
Total: 45 Periods
REFERENCES:
1. Robert Schultheis and Mary Sumner, Management Information Systems – The Manager’s
View, Tata McGraw Hill, 2008.
2. Kenneth C. Laudon and Jane P. Laudon, Management Information Systems – Managing the
Digital Firm, 15th Edition, Pearson, 2018.
3. Panneerselvam. R, Database Management Systems, 3rd Edition, PHI Learning, 2018.
Unit I: Introduction
Syllabus: Data, Information, Information System, evolution, types based on functions and
hierarchy, Enterprise and functional information systems.
Data Vs Information
Data can be described as unprocessed facts and figures. Plain collected data as raw facts cannot
help in decision-making. However, data is the raw material that is organized, structured, and
interpreted to create useful information.
Data is defined as 'groups of non-random symbols in the form of text, images, voice
representing quantities, action and objects'. Data are only the raw facts, the material for
obtaining information.
Information is interpreted data; created from organized, structured, and processed data in a
particular context.
Knowledge is the human expertise stored in a person’s mind, gained through experience, and
interaction with the person’s environment. So information when combined with (a manager’s)
insight, experience and expertise, becomes knowledge with which stronger decisions can be
made.
Example: The number 36 is data. Knowing that 36 is my age is information. Understanding what
that age means and how to act on it, drawn from experience, is the knowledge an information
system supports.
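The data-to-information-to-knowledge progression described above can be sketched in code. The names, ages and the 40-plus threshold below are made-up illustrations, not from any real system:

```python
# Raw data: unlabeled numbers mean nothing by themselves.
raw_data = [36, 41, 29, 36, 52]

# Information: the same figures organized and interpreted in a context
# (here, hypothetically, as employee ages).
ages = {"Asha": 36, "Ravi": 41, "Mei": 29, "Tom": 36, "Lena": 52}
average_age = sum(ages.values()) / len(ages)

# Knowledge: information combined with a manager's insight to support a
# decision (the 40+ rule is an invented example, not a standard).
def needs_succession_planning(age):
    return age >= 40

senior_staff = [name for name, age in ages.items() if needs_succession_planning(age)]
print(average_age)   # 38.8
print(senior_staff)  # ['Ravi', 'Lena']
```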
Intelligence: It is the ability to sense the environment, to make decisions, and to control
action. It is a step ahead of knowledge and uses information and knowledge to drive business
decisions. Intelligence is decision support: a tool for making predictions about the future in
order to take a course of action that improves outcomes.
System: A system is a set of components (subsystems) that operate together to achieve certain
objectives.
Types of System
Physical or Abstract: A physical system consists of tangible entities that may be static or
dynamic in nature. An abstract system is conceptual or non-physical; it is a
conceptualization of a physical situation.
Open and Closed: An open system continually interacts with its environment. It
receives input from the outside and delivers output to the outside. A closed system is isolated
from environmental influences.
Sub System and Super System: Each system is part of a larger system. The business
firm is viewed as the total system when the focus is on production, distribution of
goods, and sources of profit and income. The total system consists of all the objects,
attributes and relationships necessary to accomplish an objective given a number of
constraints. Subsystems are the smaller systems within a system. A super system denotes
an extremely large and complex system.
Permanent and Temporary System: A permanent system endures for a time span that is
long relative to the human operations it supports. A temporary system is one having a
short time span.
Natural and Man Made System: A system made by man is called a man-made system.
Systems existing in the environment, made by nature, are called natural systems.
Deterministic and Probabilistic: A deterministic system is one in which the occurrence
of all events is perfectly predictable: given a description of the system state at a
particular time, the next state can be predicted exactly. A probabilistic system is one in
which the occurrence of events cannot be perfectly predicted.
Information System (IS)
An information system is the group of procedures and resources used to gather, store, process
and communicate the information needed in an organization.
Information systems use data stored in computer databases to provide needed information.
1. Information systems capture data from the organization (internal data) and its
environment (external data). They convert the data into meaningful information.
2. They store the database items over an extensive period of time.
3. When specific information is needed, the appropriate data items are manipulated as
necessary, and the user receives the resulting information.
Computer Based Information System (CBIS) - It is the subset of the information system that
automates information management using computers.
Information Technology
Information technology falls under the IS umbrella but deals with the technology involved in the
systems themselves. Information technology can be defined as the study, design,
implementation, support or management of computer-based information systems.
IT typically includes hardware, software, databases and networks. Information technology often
governs the acquisition, processing, storage and dissemination of digitized information, or data,
generated through the disciplines of computing and telecommunications. Information technology
focuses on managing technology and improving its utilization to advance the overall business
goals.
Hardware: It refers to physical equipment used for input, output and processing. Hardware
refers to the computers themselves, along with any and all peripherals, including servers,
routers, monitors, printers and storage devices.
Software: The programs/application programs used to control and coordinate the hardware
components, and to analyse and process the data. These programs include sets of
instructions used for processing information.
Classification by Characteristics
Action vs. non-action information: Information that induces an action or operation is
called action information, while information that merely communicates the status of a
situation, without calling for any operation, is called non-action information.
Recurring vs. non-recurring information: Information that is generated at regular
intervals is called recurring information, whereas information that is non-repetitive in
nature is called non-recurring information.
Internal vs. external information: Information produced from sources inside an
organization is called internal information, while information produced from sources
outside the organization is called external information.
Classification by Hierarchy
Information used in business for decision-making is generally categorized into three types −
Strategic Information (Top level) − Strategic information is concerned with long term
policy decisions that define the objectives of a business and check how well these
objectives are met. For example, acquiring a new plant, a new product, diversification of
business etc., comes under strategic information.
Tactical Information (Middle level) − Tactical information is concerned with exercising
control over business resources, such as budgeting, quality control, service levels,
inventory levels and productivity levels.
Operational Information (Bottom level) − Operational information is concerned with
plant/business level information and is used to ensure the proper conduct of specific
operational tasks as planned.
Classification by Application
In terms of applications, information can be categorized as −
Planning Information − These are the information needed for establishing standard
norms and specifications in an organization. This information is used in strategic,
tactical, and operation planning of any activity. Examples of such information are time
standards, design standards.
Control Information − This information is needed for establishing control over all
business activities through feedback mechanism. This information is used for controlling
attainment, nature and utilization of important processes in a system. When such
information reflects a deviation from the established standards, the system should induce
a decision or an action leading to control.
Quality of Information
Information is a vital resource for the success of any organization. Future of an organization lies
in using and disseminating information wisely. Good quality information placed in right context
in right time tells us about opportunities and problems well in advance.
Good quality information − Quality is a value that would vary according to the users and uses of
the information. Let us generate a list of the most essential characteristic features for
information quality −
Reliability − It should be verifiable and dependable.
Timely − It must be current and it must reach the users well in time, so that important
decisions can be made in time.
Relevant − It should be current and valid information and it should reduce uncertainties.
Accurate − It should be free of errors and mistakes, true, and not deceptive.
Sufficient − It should be adequate in quantity, so that decisions can be made on its basis.
Unambiguous − It should be expressed in clear terms. In other words, it should be
comprehensible.
Complete − It should meet all the needs in the current context.
Unbiased − It should be impartial, free from any bias. In other words, it should have
integrity.
Explicit − It should not need any further explanation.
Comparable − It should be of uniform collection, analysis, content, and format.
Reproducible − It could be used by documented methods on the same data set to
achieve a consistent result.
Need & Objective of Information
Information processing is, beyond doubt, the dominant industry of the present century. The
following are a few common factors that reflect the needs and objectives of information
processing −
Increasing impact of information processing for organizational decision making.
The roles of the major types of information systems can be summarized as follows:
TPS − Collects, stores, modifies and retrieves the day-to-day transactions of an organization.
MIS − Produces pre-specified reports and displays to support business decision-making.
DSS − Provides interactive ad-hoc support for the decision-making process of an organization.
EIS − Provides both internal and external information relevant to the strategic goals of the organization.
KMS − Supports the creation, organization and dissemination of business knowledge.
E-business era − Greater connectivity and a higher level of integration of functions across applications.
1950 to 1960: Electronic Data Processing
During this period, the role of IS was mostly to perform activities like transaction
processing, recordkeeping and accounting. IS was mainly used for Electronic Data Processing
(EDP). EDP is described as the use of computers in recording, classifying, manipulating, and
summarizing data. It is also called information processing or automatic data processing.
Transaction Processing System (TPS) was the first computerized system developed to process
business data. TPS was mainly aimed at the clerical staff of an organisation. Early TPS used
batch processing: data was accumulated over a period and all transactions were processed
afterward. TPS collects, stores, modifies and retrieves the day-to-day transactions of an organization.
Usually, TPS computerizes or automates an existing manual process to allow for faster processing,
improved customer service and reduced clerical costs.
Examples of outputs from TPS are cash deposits, automatic teller machine (ATM), payment
order and accounting systems. TPS is also known as transaction processing or real-time
processing.
1960 to 1970: Management Information Systems
During this era, the role of IS evolved from TPS to Management Information Systems (MIS).
MIS processes data into useful informative reports and provides managers with the tools to organize,
evaluate and efficiently manage departments within an organization. MIS delivers information in
the form of displays and pre-specified reports to support business decision-making. Examples of
output from MIS are cost trend, sales analysis and production performance reporting systems.
Usually, MIS generates three basic types of information: detailed, summary and exception reports.
This period also marked the development when the focus of organizations shifted slowly from
merely automating basic business processes to consolidating the control within the data
processing function.
This period gave rise to departmental computing, as many organisations purchased their own
hardware and software to suit their departmental needs. Instead of waiting for the indirect support
of a centralized corporate service department, employees could use their own resources to support
their job requirements. This trend led to new challenges of data incompatibility, integrity and
connectivity across different departments. Further, top executives were using neither DSS nor
MIS; hence executive information systems (EIS), also called executive support systems (ESS),
were developed.
EIS offers decision-making facilities to executives by providing both internal and external
information relevant to meeting the strategic goals of the organization. They are sometimes
considered a specific form of DSS. Examples of EIS are systems for easy access to the actions
of competitors, economic developments to support strategic planning, and analysis of business
performance.
The Internet and related technologies and applications changed the way businesses operate and
people work. The functions of information systems in this period remain much as they were 50
years ago: record keeping, management reporting, transaction processing, management support
and managing the processes of the organization. IS is used to support business processes,
decision making and competitive advantage.
The difference is greater connectivity across similar and dissimilar system components. There is
a robust network infrastructure, a higher level of integration of functions across applications, and
powerful machines with higher storage capacity. Many businesses use Internet technologies and
web-enabled business processes to create innovative e-business applications. E-business is simply
conducting business processes using the internet.
Information Systems based on Functions & Hierarchy
As most organizations are hierarchical, the way in which the different classes of information
systems are categorized tends to follow the hierarchy. This is often described as "the pyramid
model" because the way in which the systems are arranged mirrors the nature of the tasks found
at various different levels in the organization.
For example, the three-level pyramid model is based on the type of decisions taken at different
levels in the organization.
Basing the classification on the people who use the information system means that many of the
other characteristics such as the nature of the task and informational requirements, are taken into
account more or less automatically.
Four level pyramid model based on the different levels of hierarchy in the organization
Transaction Processing System (TPS)
Transaction processing systems are used to record day to day business transactions of the
organization. They are used by users at the operational management level. The main objective of
a transaction processing system is to answer routine questions such as:
How many printers were sold today?
How much inventory do we have at hand?
What is the outstanding due for John Doe?
By recording day-to-day business transactions, the TPS provides answers to the above
questions in a timely manner.
The decisions made by operational managers are routine and highly structured.
The information produced from the transaction processing system is very detailed.
For example, banks that give out loans require that the company that a person works for should
have a memorandum of understanding (MoU) with the bank. If a person whose employer has a
MoU with the bank applies for a loan, all that the operational staff has to do is verify the
submitted documents. If they meet the requirements, then the loan application documents are
processed. If they do not meet the requirements, then the client is advised to see tactical
management staff to see the possibility of signing a MoU.
Examples of transaction processing systems include;
Point of Sale Systems – records daily sales
Payroll systems – processing employees salary, loans management, etc.
Stock Control systems – keeping track of inventory levels
Airline booking systems – flights booking management
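A point-of-sale TPS like the ones listed above can be sketched in a few lines; the item names, quantities and prices below are invented for illustration:

```python
from datetime import datetime

# Minimal point-of-sale transaction processing sketch: it collects,
# stores and retrieves day-to-day sales transactions.
transactions = []

def record_sale(item, quantity, unit_price):
    """Capture one sale as a detailed, structured record."""
    txn = {
        "timestamp": datetime.now().isoformat(),
        "item": item,
        "quantity": quantity,
        "amount": quantity * unit_price,
    }
    transactions.append(txn)
    return txn

record_sale("printer", 2, 150.0)
record_sale("printer", 1, 150.0)
record_sale("paper", 10, 4.5)

# Answering a routine operational question: how many printers were sold today?
printers_sold = sum(t["quantity"] for t in transactions if t["item"] == "printer")
print(printers_sold)  # 3
```

Note that the output is highly detailed, matching the operational level: one record per transaction, no aggregation.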
Management Information System (MIS)
Management Information Systems (MIS) are used by tactical managers to monitor the
organization's current performance status. The output from a transaction processing system is
used as input to a management information system.
The MIS analyzes the input with routine algorithms, i.e. it aggregates, compares and
summarizes the results to produce reports that tactical managers use to monitor, control and
predict future performance.
For example, input from a point of sale system can be used to analyze trends of products that are
performing well and those that are not. This information can be used to make future inventory
orders, i.e. increasing orders for well-performing products and reducing orders for products that
are not performing well.
Examples of management information systems include;
Sales management systems – they get input from the point of sale system
Budgeting systems – gives an overview of how much money is spent within the
organization for the short and long terms.
Human resource management system – overall welfare of the employees, staff
turnover, etc.
Tactical managers are responsible for semi-structured decisions. MIS systems provide the
information needed to make such decisions; based on their experience, tactical managers then
make judgement calls, e.g. predicting how much inventory should be ordered for the second
quarter based on the sales of the first quarter.
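The aggregate-and-summarize step by which an MIS turns TPS output into a management report can be sketched as follows; the product names and sales figures are hypothetical:

```python
# Hypothetical TPS output: one detailed record per sale.
tps_records = [
    {"product": "printer", "quarter": "Q1", "amount": 300.0},
    {"product": "printer", "quarter": "Q1", "amount": 150.0},
    {"product": "paper",   "quarter": "Q1", "amount": 45.0},
    {"product": "printer", "quarter": "Q2", "amount": 150.0},
]

# MIS step: aggregate and summarize with a routine algorithm so tactical
# managers can monitor performance and plan future inventory orders.
def sales_summary(records):
    totals = {}
    for r in records:
        key = (r["product"], r["quarter"])
        totals[key] = totals.get(key, 0.0) + r["amount"]
    return totals

summary = sales_summary(tps_records)
print(summary[("printer", "Q1")])  # 450.0
```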
Geographic Information System (GIS)
A geographic information system (GIS) is a conceptualized framework that provides the
ability to capture and analyze spatial and geographic data. GIS applications (or GIS apps) are
computer-based tools that allow the user to create interactive queries (user-created searches),
store and edit spatial and non-spatial data, analyze spatial information output, and visually share
the results of these operations by presenting them as maps.
GIS is more than just software. People and methods are combined with geospatial software and
tools to enable spatial analysis, manage large datasets, and display information in map/graphical
form.
Geographic information systems are utilized in multiple technologies, processes, techniques and
methods. They are attached to various operations and numerous applications, that relate to:
engineering, planning, management, transport/logistics, insurance, telecommunications, and
business. For this reason, GIS and location intelligence applications are at the foundation of
location-enabled services, that rely on geographic analysis and visualization.
GIS provides the capability to relate previously unrelated information, through the use of
location as the "key index variable". Locations and extents that are found in the Earth's
spacetime are able to be recorded through the date and time of occurrence, along with x, y, and
z coordinates, representing longitude (x), latitude (y), and elevation (z). All Earth-based,
spatial-temporal location and extent references should be relatable to one another, and
ultimately, to a "real" physical location or extent. This key characteristic of GIS has begun to
open new avenues of scientific inquiry and studies.
The common system development (SDLC) models compare as follows:
1. Waterfall Model
Advantages:
- Simple to use and understand
- Management simplicity thanks to its rigidity: every phase has a defined result and process review
- Development stages go one by one
- Perfect for small or mid-sized projects where requirements are clear and unequivocal
- Easy to determine the key points in the development cycle
- Easy to classify and prioritize tasks
Disadvantages:
- The software is ready only after the last stage is over
- High risks and uncertainty
- Not the best choice for complex and object-oriented projects
- Inappropriate for long-term projects
- The progress of a stage is hard to measure while it is still in development
- Integration is done at the very end, which does not give the option of identifying problems in advance
2. Iterative Model
Advantages:
- Some functions can be quickly developed at the beginning of the development lifecycle
- Progress is easily measurable
- The shorter the iteration, the easier the testing and debugging stages are
- It is easier to control the risks, as high-risk tasks are completed first
- Problems and risks identified within one iteration can be prevented in the next sprints
- Flexibility and readiness for changes in the requirements
Disadvantages:
- The iterative model requires more resources than the waterfall model
- Issues with architecture or design may occur because not all requirements are foreseen during the short planning stage
- Bad choice for small projects
- The process is difficult to manage
- The risks may not be completely determined even at the final stage of the project
- Risk analysis requires the involvement of highly qualified specialists
3. Spiral Model
Advantages:
- The lifecycle is divided into small parts, and if the risk concentration is higher, a phase can be finished earlier to address the threats
- The development process is precisely documented yet scalable to changes
- The scalability allows making changes and adding new functionality even at relatively late stages
- The earlier a working prototype is done, the sooner users can point out flaws
Disadvantages:
- Can be quite expensive
- Risk control demands the involvement of highly skilled professionals
- Can be ineffective for small projects
- A big number of intermediate stages requires excessive documentation
4. V-shaped Model
Advantages:
- Every stage of the V-shaped model has strict results, so it is easy to control
- Testing and verification take place in the early stages
- Good for small projects where requirements are static and clear
Disadvantages:
- Lack of flexibility
- Bad choice for small projects
- Relatively big risks
5. Agile Model
Advantages:
- Corrections of functional requirements are implemented into the development process to provide competitiveness
- The project is divided into short and transparent iterations
- Risks are minimized thanks to the flexible change process
- Fast release of the first product version
Disadvantages:
- Difficulties with measuring the final cost because of permanent changes
- The team should be highly professional and client-oriented
- New requirements may conflict with the existing architecture
- With all the corrections and changes there is a possibility that the project will exceed the expected time
6. Prototyping Model
It refers to the activity of creating prototypes of software applications, for example, incomplete
versions of the software program being developed. Prototyping is used to visualize components
of the software in order to narrow the gap between the customer's requirements and the
development team's understanding of them. It also reduces the iterations that may occur in the
waterfall approach, which are hard to implement due to the inflexibility of that approach. When
the final prototype is developed, the requirements are considered frozen.
The usage
This process can be used with any software development life cycle model, but it should be
chosen when developing a system that has user interactions. If the system does not have user
interactions, such as a system that only performs calculations, prototypes are of little use.
Advantages:
- Reduced time and costs (though this can become a disadvantage if the developer loses time developing the prototypes)
- Improved and increased user involvement
Disadvantages:
- Insufficient analysis
- User confusion of prototype and finished system
- Developer misunderstanding of user objectives
- Excessive development time of the prototype
- It is costly to implement the prototypes
System Flowcharts
System flowcharts are a way of displaying how data flows in a system and how decisions are
made to control events. To illustrate this, symbols are used; they are connected together to show
what happens to data and where it goes. Note that system flowcharts are very similar to data
flow charts. Data flow charts do not include decisions; they just show the path that data takes:
where it is held, processed, and then output.
The flowchart shows what the outcome is if the car is going too fast or too slow. The system is
designed to add fuel, or take it away and so keep the car's speed constant. The output (the car's
new speed) is then fed back into the system via the speed sensor.
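The cruise-control feedback loop described above can be sketched as a small simulation; the target speed, gain and physics are invented toy values, not a real controller design:

```python
# Toy closed-loop speed controller: the car's new speed is fed back via
# the "speed sensor", and fuel is added or taken away to hold a set speed.
TARGET = 100.0   # desired speed (illustrative units)
GAIN = 0.5       # how strongly fuel responds to the speed error (made up)

def step(speed):
    """One pass through the flowchart: sense speed, decide, adjust fuel."""
    error = TARGET - speed          # too slow -> positive, too fast -> negative
    fuel_adjustment = GAIN * error  # add fuel or take it away
    return speed + fuel_adjustment  # new speed fed back into the loop

speed = 60.0
for _ in range(10):                 # run the feedback loop ten times
    speed = step(speed)
print(round(speed, 2))              # converges toward the constant 100
```

Each iteration halves the error, so the speed settles at the target, which is exactly the constant-speed behaviour the flowchart describes.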
Decision Tables
A Decision table represents conditions and the respective actions to be taken to address them, in
a structured tabular format.
It is a powerful tool to debug and prevent errors. It helps group similar information into a single
table and then by combining tables it delivers easy and convenient decision-making.
Decision tables are a method of describing the complex logical relationship in a precise manner
which is easily understandable.
It is useful in situations where the resulting actions depend on the occurrence of one or
several combinations of independent conditions.
It is a matrix containing row or columns for defining a problem and the actions.
A decision table is divided into four basic quadrants:
Condition Stub − It is in the upper left quadrant which lists all the condition to be
checked.
Action Stub − It is in the lower left quadrant which outlines all the action to be carried
out to meet such condition.
Condition Entry − It is in upper right quadrant which provides answers to questions
asked in condition stub quadrant.
Action Entry − It is in lower right quadrant which indicates the appropriate action
resulting from the answers to the conditions in the condition entry quadrant.
The entries in decision table are given by Decision Rules which define the relationships
between combinations of conditions and courses of action. In rules section,
Y shows the existence of a condition.
N represents the condition, which is not satisfied.
A blank - against action states it is to be ignored.
X (or a check mark will do) against action states it is to be carried out.
For example, refer the following table −
CONDITIONS
Regular Customer    -  Y  N  -
ACTIONS
Give 5% discount    X  X  -  -
Give no discount    -  -  X  X
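The discount rules above can be implemented directly in code. Note that only the "Regular Customer" condition row survives in the notes, so the order-size condition and its 10-unit threshold below are assumptions added for illustration:

```python
# Decision rules following the condition-stub / action-stub layout:
# "-" in the table means the condition is irrelevant for that rule.
def discount(order_qty, regular_customer, threshold=10):
    big_order = order_qty >= threshold  # hypothetical first condition
    # Rules: 5% discount for big orders (any customer) and for regular
    # customers; otherwise no discount.
    if big_order or regular_customer:
        return 0.05
    return 0.0

print(discount(12, False))  # 0.05
print(discount(3, True))    # 0.05
print(discount(3, False))   # 0.0
```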
Decision Trees
Decision trees are a method for defining complex relationships by describing decisions and
avoiding the problems in communication. A decision tree is a diagram that shows alternative
actions and conditions within horizontal tree framework. Thus, it depicts which conditions to
consider first, second, and so on.
Decision trees depict the relationship of each condition and their permissible actions. A square
node indicates an action and a circle indicates a condition. It forces analysts to consider the
sequence of decisions and identifies the actual decision that must be made.
The major limitation of a decision tree is that it lacks information in its format to describe what
other combinations of conditions you can take for testing. It is a single representation of the
relationships between conditions and actions.
For example, refer the following decision tree −
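A decision tree's sequence of conditions maps naturally onto nested conditionals: each circle (condition) becomes an "if", each square (action) a result, and the nesting fixes which condition is tested first, second, and so on. The conditions and actions below are invented for illustration:

```python
def shipping_action(order_total, is_member):
    """Walk the tree: conditions in order, leaves are actions."""
    if order_total >= 100:            # condition tested first
        return "free shipping"        # action (square node)
    if is_member:                     # condition tested second
        return "discounted shipping"  # action
    return "standard shipping"        # default action

print(shipping_action(120, False))  # free shipping
print(shipping_action(40, True))    # discounted shipping
print(shipping_action(40, False))   # standard shipping
```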
Types of DFD
Logical DFD - This type of DFD concentrates on the system process and the flow of data in
the system. For example, in a banking software system, it shows how data is moved between
different entities.
Physical DFD - This type of DFD shows how the data flow is actually implemented in
the system. It is more specific and closer to the implementation.
The following points differentiate a physical DFD from a logical DFD:
Physical DFD - It provides low-level details of hardware, software, files, and people. It depicts
how the current system operates and how a system will be implemented.
Logical DFD - It explains the events of systems and the data required by each event. It shows
how the business operates, not how the system can be implemented.
DFD Components
DFD can represent Source, destination, storage and flow of data using the following set of
components -
Entities - Entities are the sources and destinations of information data. Entities are represented
by rectangles with their respective names.
Process - Activities and actions taken on the data are represented by circles or round-edged
rectangles.
Data Storage - There are two variants of data storage: it can be represented either as a rectangle
with both smaller sides missing, or as an open-sided rectangle with only one side missing.
Data Flow - Movement of data is shown by pointed arrows. Data movement is shown from the
base of the arrow (its source) towards the head of the arrow (its destination).
Levels of DFD
Level 0 - The highest abstraction level DFD is known as Level 0 DFD, which depicts the
entire information system as one diagram concealing all the underlying details. Level 0
DFDs are also known as context level DFDs.
Level 1 - The Level 0 DFD is broken down into
more specific, Level 1 DFD. Level 1 DFD
depicts basic modules in the system and flow of
data among various modules. Level 1 DFD also
mentions basic processes and sources of
information.
Level 2 - At this level, the DFD shows how data flows inside the modules mentioned in Level 1.
Higher-level DFDs can be transformed into more specific lower-level DFDs with a deeper level
of understanding, until the desired level of specification is achieved.
DFD is easy to understand and quite effective when the required design is not clear and the user
wants a notational language for communication. However, it requires a large number of
iterations for obtaining the most accurate and complete solution.
The following table shows the symbols used in designing a DFD and their significance −
Entity-Relationship Model
The Entity-Relationship model is a type of database model based on the notion of real-world
entities and the relationships among them. We can map a real-world scenario onto the ER
database model. The ER model creates a set of entities with their attributes, a set of constraints,
and relations among them. The ER model is best used for the conceptual design of a database. It
can be represented as follows:
Entity - An entity in ER Model is a real world being, which has some properties
called attributes. Every attribute is defined by its corresponding set of values,
called domain.
For example, Consider a school database. Here, a student is an entity. Student has
various attributes like name, id, age and class etc.
The ER model defines the conceptual view of a database. It works around real-world entities
and the associations among them. At view level, the ER model is considered a good option for
designing databases.
Entity
An entity can be a real-world object, either animate or inanimate, that can be easily identifiable.
For example, in a school database, students, teachers, classes, and courses offered can be
considered as entities. All these entities have some attributes or properties that give them their
identity.
An entity set is a collection of similar types of entities. An entity set may contain entities whose
attributes share similar values. For example, a Students set may contain all the students of a
school; likewise, a Teachers set may contain all the teachers of a school from all faculties. Entity
sets need not be disjoint.
Attributes
Entities are represented by means of their properties, called attributes. All attributes have
values. For example, a student entity may have name, class, and age as attributes.
There exists a domain or range of values that can be assigned to attributes. For example, a
student's name cannot be a numeric value. It has to be alphabetic. A student's age cannot be
negative, etc.
Relationship
The association among entities is called a relationship. For example, an employee works_at a
department, a student enrolls in a course. Here, Works_at and Enrolls are called relationships.
Relationship Set
A set of relationships of similar type is called a relationship set. Like entities, a relationship too
can have attributes. These attributes are called descriptive attributes.
An Entity–relationship model (ER model) describes the structure of a database with the help of
a diagram, which is known as Entity Relationship Diagram (ER Diagram). An ER model is a
design or blueprint of a database that can later be implemented as a database. The main
components of E-R model are: entity set and relationship set.
A simple ER Diagram:
In the following diagram we have two entities, Student and College, and their relationship. The relationship between Student and College is many to one, as a college can have many students but a student cannot study in multiple colleges at the same time. The Student entity has attributes such as Stu_Id, Stu_Name & Stu_Addr, and the College entity has attributes such as Col_ID & Col_Name.
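As a sketch of how this ER model could be implemented as relational tables (the attribute names follow the example above; the sample rows and the use of Python's built-in sqlite3 are illustrative, not from the notes):

```python
import sqlite3

# Map the Student-College ER model (many-to-one) to relational tables.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("CREATE TABLE College (Col_ID INTEGER PRIMARY KEY, Col_Name TEXT)")
conn.execute("""CREATE TABLE Student (
    Stu_Id   INTEGER PRIMARY KEY,
    Stu_Name TEXT,
    Stu_Addr TEXT,
    Col_ID   INTEGER REFERENCES College(Col_ID)  -- many students, one college
)""")

conn.execute("INSERT INTO College VALUES (1, 'ABC College')")
conn.executemany("INSERT INTO Student VALUES (?, ?, ?, ?)",
                 [(101, 'Asha', 'Chennai', 1), (102, 'Ravi', 'Madurai', 1)])

# Both students resolve to the same College row, but a Student row can
# point to only one college: this is the many-to-one relationship.
rows = conn.execute("""SELECT s.Stu_Name, c.Col_Name
                       FROM Student s JOIN College c ON s.Col_ID = c.Col_ID
                       ORDER BY s.Stu_Id""").fetchall()
print(rows)  # → [('Asha', 'ABC College'), ('Ravi', 'ABC College')]
```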
Here are the geometric shapes and their meaning in an E-R diagram. We will discuss these terms in detail in the next section (Components of an ER Diagram) of this guide, so don't worry too much about them now; just go through them once.
Components of an ER Diagram
1. Entity
An entity is an object or component of data. An entity is represented as a rectangle in an ER diagram.
For example: In the following ER diagram
we have two entities Student and College
and these two entities have many to one
relationship as many students study in a
single college. We will read more about
relationships later, for now focus on entities.
Weak Entity:
An entity that cannot be uniquely identified by its own attributes and relies on a relationship with another entity is called a weak entity. A weak entity is represented by a double rectangle. For example, a bank account cannot be uniquely identified without knowing the bank to which the account belongs, so a bank account is a weak entity.
2. Attribute
An attribute describes a property of an entity. An attribute is represented as an oval in an ER diagram. There are four types of attributes:
1. Key attribute
2. Composite attribute
3. Multivalued attribute
4. Derived attribute
1. Key attribute:
A key attribute can uniquely identify an entity from an entity set. For example, a student roll number can uniquely identify a student from a set of students. A key attribute is represented by an oval like other attributes; however, the text of a key attribute is underlined.
2. Composite attribute:
An attribute that is a combination of other attributes is known as a composite attribute. For example, in a Student entity, the student address is a composite attribute, as an address is composed of other attributes such as pin code, state, and country.
3. Multivalued attribute:
An attribute that can hold multiple values is known as a multivalued attribute. It is represented with double ovals in an ER diagram. For example, a person can have more than one phone number, so the phone number attribute is multivalued.
4. Derived attribute:
A derived attribute is one whose value is dynamic and derived from another attribute. It is represented by a dashed oval in an ER diagram. For example, a person's age is a derived attribute, as it changes over time and can be derived from another attribute (date of birth).
[Figure: E-R diagram with multivalued and derived attributes]
3. Relationship
A relationship is represented by a diamond shape in an ER diagram; it shows the relationship among entities. There are four types of relationships:
1. One to One
2. One to Many
3. Many to One
4. Many to Many
Generalization is a process in which the common attributes of two or more entities form a new entity. This newly formed entity is called a generalized entity.
Generalization Example
Let's say we have two entities, Student and Teacher.
Attributes of entity Student are: Name, Address & Grade.
Attributes of entity Teacher are: Name, Address & Salary.
The ER diagram before generalization looks like this:
These two entities have two common attributes, Name and Address, so we can make a generalized entity with these common attributes. Let's have a look at the ER model after generalization.
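The same generalization can be sketched in code: a hypothetical Person class holds the common Name and Address attributes, and Student and Teacher inherit from it (a minimal sketch, not the notes' own diagram):

```python
# Person is the generalized entity holding the common attributes;
# Student and Teacher keep only what is specific to them.
class Person:
    def __init__(self, name, address):
        self.name = name
        self.address = address

class Student(Person):
    def __init__(self, name, address, grade):
        super().__init__(name, address)
        self.grade = grade

class Teacher(Person):
    def __init__(self, name, address, salary):
        super().__init__(name, address)
        self.salary = salary

s = Student("Asha", "Chennai", "A")
t = Teacher("Ravi", "Madurai", 50000)
print(isinstance(s, Person), isinstance(t, Person))  # → True True
```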
OOAD - Object Oriented Analysis & Design
A Brief History
The object-oriented paradigm took its shape from the initial concept of a new programming
approach, while the interest in design and analysis methods came much later.
The first object-oriented language was Simula (Simulation of real systems), developed in the 1960s by researchers at the Norwegian Computing Center.
In the 1970s, Alan Kay and his research group at Xerox PARC conceived a personal computer called the Dynabook and created the first pure object-oriented programming language (OOPL), Smalltalk, for programming the Dynabook.
In the 1980s, Grady Booch published a paper titled Object Oriented Design that mainly presented a
design for the programming language, Ada. In the ensuing editions, he extended his ideas to a
complete object–oriented design method.
In the 1990s, Coad incorporated behavioral ideas to object-oriented methods.
The other significant innovations were the Object Modeling Technique (OMT) by James Rumbaugh and Object-Oriented Software Engineering (OOSE) by Ivar Jacobson.
Object-Oriented Analysis
Object–Oriented Analysis (OOA) is the procedure of identifying software engineering requirements and developing software specifications in terms of a software system's object model, which comprises interacting objects.
The main difference between object-oriented analysis and other forms of analysis is that in
object-oriented approach, requirements are organized around objects, which integrate both data
and functions. They are modelled after real-world objects that the system interacts with. In
traditional analysis methodologies, the two aspects - functions and data - are considered
separately.
Grady Booch has defined OOA as, “Object-oriented analysis is a method of analysis that
examines requirements from the perspective of the classes and objects found in the vocabulary
of the problem domain”.
The primary tasks in object-oriented analysis (OOA) are −
Identifying objects
Organizing the objects by creating an object model diagram
Defining the internals of the objects, or object attributes
Defining the behavior of the objects, i.e., object actions
Describing how the objects interact
The common models used in OOA are use cases and object models.
Object-Oriented Design
Object–Oriented Design (OOD) involves implementation of the conceptual model produced
during object-oriented analysis. In OOD, concepts in the analysis model, which are technology-independent, are mapped onto implementing classes, constraints are identified, and interfaces are designed, resulting in a model for the solution domain, i.e., a detailed description of how the system is to be built on concrete technologies.
The implementation details generally include −
Object-Oriented Programming
Object-oriented programming (OOP) is a programming paradigm based upon objects (having
both data and methods) that aims to incorporate the advantages of modularity and reusability.
Objects, which are usually instances of classes, are used to interact with one another to design
applications and computer programs.
The important features of object–oriented programming are −
Object
An object is a real-world element in an object–oriented environment that may have a physical or
a conceptual existence. Each object has −
Identity that distinguishes it from other objects in the system.
State that determines the characteristic properties of an object as well as the values of the properties
that the object holds.
Behavior that represents externally visible activities performed by an object in terms of changes in its
state.
Objects can be modelled according to the needs of the application. An object may have a
physical existence, like a customer, a car, etc.; or an intangible conceptual existence, like a
project, a process, etc.
Class
A class represents a collection of objects having same characteristic properties that exhibit
common behavior. It gives the blueprint or description of the objects that can be created from it.
Creation of an object as a member of a class is called instantiation. Thus, object is an instance of
a class.
The constituents of a class are −
A set of attributes for the objects that are to be instantiated from the class. Generally, different objects of a class have some difference in the values of their attributes. Attributes are often referred to as class data.
A set of operations that portray the behavior of the objects of the class. Operations are also referred to as functions or methods.
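As a minimal sketch of these ideas (the BankAccount class and its members are illustrative, not from the notes): the class is the blueprint, each instantiated object carries its own attribute values, and the methods implement the shared operations:

```python
# A class bundles attributes (class data) and operations (methods);
# instantiation creates objects from the class blueprint.
class BankAccount:
    def __init__(self, owner, balance=0):
        self.owner = owner      # attribute
        self.balance = balance  # attribute

    def deposit(self, amount):  # operation / method
        self.balance += amount
        return self.balance

# Two objects (instances) of the same class, each with its own state.
a = BankAccount("Asha")
b = BankAccount("Ravi", 100)
a.deposit(50)
print(a.balance, b.balance)  # → 50 100
```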
UML
UML (Unified Modeling Language) is a standard language for specifying, visualizing, constructing,
and documenting the artifacts of software systems. UML was created by the Object Management
Group (OMG) and UML 1.0 specification draft was proposed to the OMG in January 1997. It was
initially started to capture the behavior of complex software and non-software systems, and it has now become an OMG standard.
OMG is continuously making efforts to create a truly industry standard.
UML is different from the other common programming languages such as C++, Java, COBOL, etc.
UML is a pictorial language used to make software blueprints.
UML can be described as a general-purpose visual modeling language to visualize, specify, construct, and document software systems.
Although UML is generally used to model software systems, it is not limited to this boundary. It is also used to model non-software systems, for example, the process flow in a manufacturing unit.
UML is not a programming language, but tools can be used to generate code in various languages from UML diagrams. UML has a direct relation with object-oriented analysis and design.
Goals of UML
A picture is worth a thousand words; this idiom fits UML perfectly. Object-oriented concepts were introduced much earlier than UML. At that time, there were no standard methodologies to organize and consolidate object-oriented development. It was then that UML came into the picture.
There are a number of goals for developing UML but the most important is to define some
general purpose modeling language, which all modelers can use and it also needs to be made
simple to understand and use.
UML diagrams are made not only for developers but also for business users, common people, and anybody interested in understanding the system. The system can be a software or a non-software system. Thus it must be clear that UML is not a development method; rather, it accompanies processes to make them successful.
In conclusion, the goal of UML can be defined as a simple modeling mechanism to model all
possible practical systems in today’s complex environment.
UML Diagrams
UML diagrams are the ultimate output of the entire discussion. All the elements, relationships
are used to make a complete UML diagram and the diagram represents a system.
The visual effect of the UML diagram is the most important part of the entire process. All the
other elements are used to make it complete.
UML includes the following nine diagrams, the details of which are described in the subsequent
chapters.
Class diagram
Object diagram
Use case diagram
Sequence diagram
Collaboration diagram
Activity diagram
State chart diagram
Deployment diagram
Component diagram
Unit III: Database Management Systems
Syllabus: DBMS – types and evolution, RDBMS, OODBMS, RODBMS, Data warehousing,
Data Mart, Data mining
Database - definition
A database is an organized collection of structured information, or data, typically stored
electronically in a computer system. A database is usually controlled by a database management
system (DBMS).
What is Database?
A database is a systematic collection of data. They support electronic storage and manipulation
of data. Databases make data management easy.
Together, the data and the DBMS, along with the applications that are associated with them, are
referred to as a database system, often shortened to just database.
Let us discuss a database example: Facebook. It needs to store, manipulate, and present data
related to members, their friends, member activities, messages, advertisements, and a lot more.
We can provide a countless number of examples for the usage of databases.
Types of Databases
Here are some popular types of databases.
1. Distributed databases:
A distributed database is a type of database that combines contributions from a common database with information captured by local computers. In this type of database system, the data is not stored in one place but is distributed across various sites or organizations.
2. Relational databases:
This type of database defines database relationships in the form of tables. It is also called a Relational DBMS, and it is the most popular DBMS type in the market. Examples of RDBMS systems include MySQL, Oracle, and Microsoft SQL Server.
3. Object-oriented databases:
This type of database supports the storage of all data types. The data is stored in the form of objects. The objects to be held in the database have attributes and methods that define what to do with the data. PostgreSQL is an example of an object-relational DBMS.
4. Centralized database:
Data is stored and maintained at a centralized location, and users from different backgrounds can access it. This type of database stores application procedures that help users access the data even from a remote location.
5. Data warehouses:
The purpose of a data warehouse is to provide a single version of truth for a company for decision making and forecasting. A data warehouse is an information system that contains historical and cumulative data from single or multiple sources. The data warehouse concept simplifies the reporting and analysis process of the organization.
6. NoSQL databases:
A NoSQL database is used for large sets of distributed data. Several big data performance problems are not handled effectively by relational databases; this type of database is very efficient at analyzing large volumes of unstructured data.
7. Graph databases:
A graph-oriented database uses graph theory to store, map, and query relationships. These kinds of databases are mostly used for analyzing interconnections. For example, an organization can use a graph database to mine data about customers from social media.
8. OLTP databases:
OLTP is another database type, able to perform fast query processing while maintaining data integrity in multi-access environments.
9. Personal database:
A personal database is used to store data on a personal computer; it is smaller and easily manageable. The data is mostly used by the same department of the company and is accessed by a small group of people.
10. Hierarchical:
This type of DBMS employs the “parent-child” relationship of storing data. Its structure is like a
tree with nodes representing records and branches representing fields. The windows registry used
in Windows XP is a hierarchical database example.
11. Network DBMS:
This type of DBMS supports many-to-many relations. It usually results in complex database structures. RDM Server is an example of a database management system that implements the network model.
Some of the latest databases include
12. Open-source databases:
This kind of database stores information related to operations. It is mainly used in fields such as marketing, employee relations, and customer service.
13. Cloud databases:
A cloud database is a database that is optimized or built for a virtualized environment. A cloud database has many advantages, such as paying only for the storage capacity and bandwidth used; it also offers scalability on demand, along with high availability.
14. Self-driving databases:
The newest and most groundbreaking type of database, self-driving databases (also known as
autonomous databases) are cloud-based and use machine learning to automate database
tuning, security, backups, updates, and other routine management tasks traditionally
performed by database administrators.
15. Multimodal database:
A multimodal database is a data processing platform that supports multiple data models, which define how the knowledge and information in the database should be organized and arranged.
16. Document/JSON database:
In a document-oriented database, the data is kept in document collections, usually using the
XML, JSON, BSON formats. One record can store as much data as you want, in any data type
(or types) you prefer.
Database Components
DBMS
DBMS stands for Database Management System. We can break it like this DBMS =
Database + Management System.
A database management system stores data in such a way that it becomes easier to
retrieve, manipulate, and produce information. DBMS is a collection of inter-related data
and set of programs to store & access those data in an easy and effective manner.
Database software makes data management simpler by enabling users to store data in a
structured form and then access it. It typically has a graphical interface to help create and
manage the data and, in some cases, users can construct their own databases by using database
software.
2. Two tier architecture
In two-tier architecture, the database system is present on the server machine and the DBMS application is present on the client machine; the two machines are connected with each other through a reliable network.
Whenever the client machine makes a request to access the database on the server using a query language like SQL, the server performs the request on the database and returns the result to the client. Application connection interfaces such as JDBC and ODBC are used for the interaction between server and client.
3. Three tier architecture
In three-tier architecture, another layer is present between the client machine and server machine.
In this architecture, the client application doesn’t communicate directly with the database
systems present at the server machine, rather the client application communicates with server
application and the server application internally communicates with the database system present
at the server
DBMS – Three Level Architecture
DBMS Three Level Architecture Diagram
The design of a database at the physical level is called the physical schema; how the data is stored in blocks of storage is described at this level.
The design of a database at the logical level is called the logical schema. Programmers and database administrators work at this level. Here, data can be described as certain types of data records stored in data structures; however, internal details such as the implementation of the data structures are hidden at this level (they are available at the physical level).
The design of a database at the view level is called the view schema. This generally describes the end user's interaction with database systems.
DBMS Instance
Definition of instance: The data stored in database at a particular moment of time is called
instance of database. Database schema defines the variable declarations in tables that belong to a
particular database; the value of these variables at a moment of time is called the instance of that
database.
For example, let's say we have a single table, student, in the database; today the table has 100 records, so today the instance of the database has 100 records. Say we add another 100 records to this table by tomorrow; the instance of the database tomorrow will then have 200 records. In short, the data stored in the database at a particular moment is called the instance, and it changes over time as we add or delete data.
DBMS languages
Database languages are used to read, update and store data in a database. There are several such
languages that can be used for this purpose; one of them is SQL (Structured Query Language).
Types of DBMS languages:
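SQL statements are conventionally grouped into a Data Definition Language (DDL), which defines the schema, and a Data Manipulation Language (DML), which reads and changes the data. A minimal sketch using Python's built-in sqlite3 (the table and sample rows are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: define the table structure.
conn.execute("CREATE TABLE student (id INTEGER PRIMARY KEY, name TEXT, age INTEGER)")

# DML: insert, update, and query the data.
conn.execute("INSERT INTO student VALUES (111, 'Ashish', 23)")
conn.execute("INSERT INTO student VALUES (123, 'Saurav', 22)")
conn.execute("UPDATE student SET age = 24 WHERE id = 123")

rows = conn.execute("SELECT name, age FROM student ORDER BY id").fetchall()
print(rows)  # → [('Ashish', 23), ('Saurav', 24)]
```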
4. Network Model
The Network Database Model is similar to the Hierarchical Model; the only difference is that it allows a record to have more than one parent.
In this model, there is no need for a parent-to-child association as in the hierarchical model.
It replaces the hierarchical tree with a graph.
It represents the data as record types and one-to-many relationships.
This model is easy to design and understand.
5. Object Model
Object model stores the data in the form of objects, classes and inheritance.
This model handles more complex applications, such as Geographic Information System
(GIS), scientific experiments, engineering design and manufacturing.
It is used in File Management System.
It represents real world objects, attributes and behaviors.
It provides a clear modular structure.
It is easy to maintain and modify the existing code.
RDBMS Concepts
RDBMS stands for relational database management system. A relational model can be
represented as a table of rows and columns. A relational database has following major
components:
1. Table 5. Instance
2. Record or Tuple 6. Schema
3. Field or Column name or Attribute 7. Keys
4. Domain
1. Table
A table is a collection of data represented in rows and columns. Each table has a name in
database. For example, the following table “STUDENT” stores the information of students in
database.
Table: STUDENT
Alternate Key – Out of all candidate keys, only one gets selected as primary key, remaining
keys are known as alternate or secondary keys.
Composite Key – A key that consists of more than one attribute to uniquely identify rows (also
known as records & tuples) in a table is called composite key.
Foreign Key – Foreign keys are the columns of a table that point to the primary key of another table. They act as a cross-reference between tables.
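A sketch of a composite key and a foreign key working together (the enrollment example is hypothetical, not from the notes), using Python's built-in sqlite3:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

conn.execute("CREATE TABLE course (course_id INTEGER PRIMARY KEY, title TEXT)")
conn.execute("""CREATE TABLE enrollment (
    student_id INTEGER,
    course_id  INTEGER REFERENCES course(course_id),  -- foreign key
    grade      TEXT,
    PRIMARY KEY (student_id, course_id)               -- composite key
)""")

conn.execute("INSERT INTO course VALUES (1, 'DBMS')")
conn.execute("INSERT INTO enrollment VALUES (101, 1, 'A')")

# A second row with the same (student_id, course_id) pair violates the
# composite primary key and is rejected by the database.
violated = False
try:
    conn.execute("INSERT INTO enrollment VALUES (101, 1, 'B')")
except sqlite3.IntegrityError:
    violated = True
print(violated)  # → True
```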
| Parameters | RDBMS | OODBMS |
| --- | --- | --- |
| Definition | RDBMS stands for Relational DataBase Management System. | OODBMS stands for Object Oriented DataBase Management System. |
| Data complexity | RDBMS handles simple data. | OODBMS handles large and complex data. |
| Term | An entity refers to a collection of similar items having the same definition. | A class refers to a group of objects having common relationships, behaviors and properties. |
| Data handling | RDBMS handles only data. | OODBMS handles both data and the functions operating on that data. |
| Key | A primary key uniquely identifies a record in a table. | An Object Id (OID) uniquely represents an object in a group of objects. |
Data Warehousing
A data warehouse is constructed by integrating data from multiple heterogeneous sources. It
supports analytical reporting, structured and/or ad hoc queries and decision making.
Understanding a Data Warehouse
A data warehouse is a database, which is kept separate from the organization's
operational database.
There is no frequent updating done in a data warehouse.
It possesses consolidated historical data, which helps the organization to analyze its
business.
A data warehouse helps executives to organize, understand, and use their data to take
strategic decisions.
Data warehouse systems help in the integration of a diversity of application systems.
A data warehouse system helps in consolidated historical data analysis.
Data warehouse information is widely applied in the following sectors:
Financial services
Banking services
Consumer goods
Retail sectors
Controlled manufacturing
OLTP vs OLAP
| Parameters | OLTP | OLAP |
| --- | --- | --- |
| Process | It is an online transactional system; it manages database modification. | OLAP is an online analysis and data retrieving process. |
| Method | OLTP uses a traditional DBMS. | OLAP uses the data warehouse. |
| Table | Tables in an OLTP database are normalized. | Tables in an OLAP database are not normalized. |
| Source | OLTP and its transactions are the sources of data. | Different OLTP databases become the source of data for OLAP. |
| Data integrity | An OLTP database must maintain data integrity constraints. | An OLAP database is not frequently modified; hence, data integrity is not an issue. |
| Response time | Its response time is in milliseconds. | Response time is in seconds to minutes. |
| Data quality | The data in the OLTP database is always detailed and organized. | The data in the OLAP process might not be organized. |
| Usefulness | It helps to control and run fundamental business tasks. | It helps with planning, problem-solving, and decision support. |
| Back-up | Complete backup of the data combined with incremental backups. | OLAP only needs a backup from time to time; backup is less critical than for OLTP. |
| User type | Used by data-critical users like clerks, DBAs and database professionals. | Used by data-knowledge users like workers, managers, and CEOs. |
| Number of users | Allows thousands of users. | Allows only hundreds of users. |
| Productivity | It helps to increase the user's self-service and productivity. | It helps to increase the productivity of business analysts. |
| Challenge | Data warehouses historically have been development projects which may prove costly to build. | An OLAP cube is not an open SQL server data warehouse; therefore, technical knowledge and experience are essential to manage the OLAP server. |
Data Mart
Data marts contain a subset of organization-wide data that is valuable to specific groups of
people in an organization. In other words, a data mart contains only those data that is specific to
a particular group. For example, the marketing data mart may contain only data related to items,
customers, and sales. Data marts are confined to subjects.
What is Data Mart?
A Data Mart is focused on a single functional area of an organization and contains a subset of
data stored in a Data Warehouse. A Data Mart is a condensed version of Data Warehouse and is
designed for use by a specific department, unit or set of users in an organization.
E.g., Marketing, Sales, HR or finance. It is often controlled by a single department in an
organization.
Data Mart usually draws data from only a few sources compared to a Data warehouse. Data
marts are small in size and are more flexible compared to a Data warehouse.
Advantages of a Data Mart
Data marts contain a subset of organization-wide data. This data is valuable to a specific group of people in an organization.
It is a cost-effective alternative to a data warehouse, which can be very expensive to build.
A data mart allows faster access to data.
A data mart is easy to use, as it is specifically designed for the needs of its users. Thus a data mart can accelerate business processes.
Data marts need less implementation time compared to data warehouse systems; it is faster to implement a data mart as you only need to concentrate on a subset of the data.
It contains historical data, which enables the analyst to determine data trends.
Disadvantages of a Data Mart
Many times, enterprises create too many disparate and unrelated data marts without much benefit. They can become a big hurdle to maintain.
A data mart cannot provide company-wide data analysis as its data set is limited.
Data Warehouse vs Data Mart
| Parameters | Data Warehouse | Data Mart |
| --- | --- | --- |
| Definition | A Data Warehouse is a large repository of data collected from different organizations or departments within a corporation. | A data mart is a subtype of a Data Warehouse, designed to meet the needs of a certain user group. |
| Usage | It helps to take strategic decisions. | It helps to take tactical decisions for the business. |
| Objective | The main objective of a Data Warehouse is to provide an integrated environment and a coherent picture of the business at a point in time. | A data mart is mostly used in a business division at the department level. |
| Designing | The designing process of a Data Warehouse is quite difficult. It may or may not use a dimensional model; however, it can feed dimensional models. | The designing process of a Data Mart is easy. It is built focused on a dimensional model using a star schema. |
| Data handling | Data warehousing covers a large area of the corporation, which is why it takes a long time to process. | Data marts are easy to use, design and implement, as they handle only small amounts of data. |
| Focus | Data warehousing is broadly focused on all the departments. It may even represent the entire company. | A Data Mart is subject-oriented, and it is used at a department level. |
| Data type | The data stored inside the Data Warehouse is always detailed when compared with a data mart. | Data Marts are built for particular user groups; therefore, data is short and limited. |
| Subject area | Provides an integrated environment and a coherent picture of the business at a point in time. | Mostly holds only one subject area, for example, sales figures. |
| Data storing | Designed to store enterprise-wide decision data, not just marketing data. | Dimensional modeling and star schema design are employed to optimize the performance of the access layer. |
| Data structure | Time variance and non-volatile design are strictly enforced. | Mostly includes consolidated data structures to meet the subject area's query and reporting needs. |
| Data value | Read-only from the end-user's standpoint. | Transaction data, regardless of grain, fed directly from the Data Warehouse. |
| Scope | More helpful as it can bring information from any department. | Contains data of a specific department of a company; there may be separate data marts for sales, finance, marketing, etc., with limited usage. |
| Source | Data comes from many sources. | Data comes from very few sources. |
| Size | May range from 100 GB to 1 TB+. | Less than 100 GB. |
| Implementation time | Can extend from months to years. | Restricted to a few months. |
Data Mining
Data Mining is defined as the procedure of extracting information from huge sets of data. In
other words, we can say that data mining is mining knowledge from data.
Data Transformation:
Data transformation operations contribute toward the success of the mining process.
Smoothing: It helps to remove noise from the data.
Aggregation: Summary or aggregation operations are applied to the data. For example, weekly sales data is aggregated to calculate monthly and yearly totals.
Generalization: Low-level data is replaced by higher-level concepts with the help of concept hierarchies. For example, the city is replaced by the county.
Normalization: Performed when the attribute data are scaled up or scaled down. For example, data should fall in the range -2.0 to 2.0 after normalization.
Attribute construction: New attributes are constructed from the given set of attributes and included to help the mining process.
The result of this process is a final data set that can be used in modeling.
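The normalization step described above (scaling values into a fixed range such as -2.0 to 2.0) can be sketched as min-max scaling; the sales figures here are illustrative:

```python
# Min-max normalization: rescale raw values into [-2.0, 2.0],
# matching the range mentioned in the notes.
def min_max_scale(values, new_min=-2.0, new_max=2.0):
    lo, hi = min(values), max(values)
    return [new_min + (v - lo) * (new_max - new_min) / (hi - lo)
            for v in values]

sales = [200, 400, 600, 1000]
print(min_max_scale(sales))  # → [-2.0, -1.0, 0.0, 2.0]
```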
Modeling
In this phase, mathematical models are used to determine data patterns.
Based on the business objectives, suitable modeling techniques should be selected for the
prepared dataset.
Create a scenario to test and check the quality and validity of the model.
Run the model on the prepared dataset.
Results should be assessed by all stakeholders to make sure that the model can meet the data mining objectives.
Evaluation:
In this phase, patterns identified are evaluated against the business objectives.
Results generated by the data mining model should be evaluated against the business
objectives.
Gaining business understanding is an iterative process. In fact, new business requirements may be raised during the process because of data mining.
A go or no-go decision is taken to move the model in the deployment phase.
Deployment:
In the deployment phase, you ship your data mining discoveries to everyday business operations.
The knowledge or information discovered during data mining process should be made
easy to understand for non-technical stakeholders.
A detailed deployment plan, for shipping, maintenance, and monitoring of data mining
discoveries is created.
A final project report is created with lessons learned and key experiences during the
project. This helps to improve the organization's business policy.
Data Mining Techniques
1. Classification:
This analysis is used to retrieve important and relevant information about data and metadata. This data mining method helps classify data into different classes.
2. Clustering:
Clustering analysis is a data mining technique to identify data that are similar to each other. This process helps to understand the differences and similarities between the data.
3. Regression:
Regression analysis is the data mining method of identifying and analyzing the relationship
between variables. It is used to identify the likelihood of a specific variable, given the
presence of other variables.
4. Association Rules:
This data mining technique helps to find the association between two or more Items. It
discovers a hidden pattern in the data set.
5. Outlier detection:
This type of data mining technique refers to the observation of data items in the dataset that do not match an expected pattern or expected behavior. This technique can be used in a variety of domains, such as intrusion detection, fraud or fault detection, etc. Outlier detection is also called outlier analysis or outlier mining.
6. Sequential Patterns:
This data mining technique helps to discover or identify similar patterns or trends in transaction data for a certain period.
7. Prediction:
Prediction uses a combination of the other data mining techniques, such as trends, sequential patterns, clustering and classification. It analyzes past events or instances in the right sequence to predict a future event.
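One of these techniques, association-rule mining, can be sketched with the two standard measures, support and confidence. The transactions and items below are toy examples invented for illustration.

```python
# Minimal sketch of association-rule mining: computing support and
# confidence for the rule {bread} -> {butter} over a toy basket set.
# The data is invented, not from a real dataset.

transactions = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "jam"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    hits = sum(1 for t in transactions if itemset <= t)
    return hits / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent) = support(A and C) / support(A)."""
    return support(antecedent | consequent) / support(antecedent)

# bread appears in 3 of 4 baskets, bread+butter in 2 of 4,
# so the rule {bread} -> {butter} has confidence 2/3.
conf = confidence({"bread"}, {"butter"})
```

A rule with high support and high confidence is exactly the kind of hidden pattern the technique is meant to surface.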
Data Mining Applications
Communications: Data mining techniques are used in the communication sector to predict customer behavior and offer highly targeted, relevant campaigns.
Insurance: Data mining helps insurance companies price their products profitably and promote new offers to their new or existing customers.
Education: Data mining benefits educators by providing access to student data, predicting achievement levels and finding students or groups of students who need extra attention, for example, students who are weak in mathematics.
Manufacturing: With the help of data mining, manufacturers can predict wear and tear of production assets. They can anticipate maintenance, which helps them minimize downtime.
Banking: Data mining helps the finance sector get a view of market risks and manage regulatory compliance. It helps banks identify probable defaulters to decide whether to issue credit cards, loans, etc.
Retail: Data mining techniques help retail malls and grocery stores identify and arrange the most sellable items in the most attention-grabbing positions. It helps store owners come up with offers that encourage customers to increase their spending.
Service Providers: Service providers such as mobile phone and utility companies use data mining to predict the reasons why a customer leaves the company. They analyze billing details, customer service interactions and complaints made to the company to assign each customer a probability score and offer incentives.
E-Commerce: E-commerce websites use data mining to offer cross-sells and up-sells through their websites. One of the most famous names is Amazon, which uses data mining techniques to bring more customers into its eCommerce store.
Supermarkets: Data mining allows supermarkets to develop rules to predict whether their shoppers are likely to be expecting a baby. By evaluating buying patterns, they can find customers who are most likely pregnant and start targeting products like baby powder, baby soap, diapers and so on.
Crime Investigation: Data mining helps crime investigation agencies deploy the police workforce (where is a crime most likely to happen, and when?), decide whom to search at a border crossing, etc.
Bioinformatics: Data mining helps to mine biological data from massive datasets gathered in biology and medicine.
Unit IV - Integrated Systems, Security and Control
Syllabus: Knowledge based decision support systems, Integrating social media and mobile
technologies in Information system, Security, IS Vulnerability, Disaster Management, Computer
Crimes, Securing the Web.
Knowledge-Based Decision Support System
Implementing a knowledge-based decision support system is one of the best ways to capture, process, store and share knowledge among employees. The information can be easily accessed by users to resolve a variety of problems, issues or concerns.
Before the development of knowledge-driven DSS, employees with high intellect had to
perform knowledge-intensive tasks. An expert in a particular area would know how to
approach a problem and go about it. Similarly, knowledge-based DSS asks relevant
questions, offers suggestions and gives advice to solve a problem. The only difference is that
it’s automated and speeds up the whole process.
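As a rough illustration (not from the referenced texts), the rule-based core of a knowledge-driven DSS can be sketched as a set of condition/advice pairs, the way an expert would encode "if this situation, then this recommendation". The loan-approval rules, thresholds and field names below are entirely hypothetical.

```python
# Tiny sketch of how a knowledge-driven DSS can encode expert rules:
# a rule base maps observed facts to advice. Rules, thresholds and
# field names are invented for illustration.

RULES = [
    # (condition on the facts, advice offered when it holds)
    (lambda f: f.get("credit_score", 0) < 600,
     "Decline the loan application"),
    (lambda f: f.get("credit_score", 0) >= 600 and f.get("income", 0) < 30000,
     "Request a guarantor before approval"),
    (lambda f: f.get("credit_score", 0) >= 600 and f.get("income", 0) >= 30000,
     "Approve the loan application"),
]

def advise(facts):
    """Return the advice of every rule whose condition matches the facts."""
    return [advice for cond, advice in RULES if cond(facts)]

recommendations = advise({"credit_score": 720, "income": 45000})
```

The automation lies in evaluating all the expert-supplied conditions instantly, which is what lets such a system speed up a knowledge-intensive task.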
Before we dig deeper, let's learn a few important terms and concepts used alongside knowledge-driven decision support systems. Being familiar with the technical jargon that experts in this field use helps gain an in-depth understanding of such support systems.
Integrating Social Media in Information Systems
Social media is also used for crowdsourcing. That's the practice of using social networking to
gather knowledge, goods or services. Companies use crowdsourcing to get ideas from
employees, customers and the general public for improving products or developing future
products or services.
Examples of business applications of social media include the following:
Social media analytics. This is the practice of gathering and analyzing data from blogs
and social media websites to assist in making business decisions. The most common use
of social media analytics is to do customer sentiment analysis.
Social media marketing (SMM). This application increases a company's brand exposure
and customer reach. The goal is to create compelling content that social media users will
share with their social networks. A key component of SMM is social media optimization
(SMO). SMO is a strategy for drawing new visitors to a website. Social media links and
share buttons are added to content and activities are promoted via status updates, tweets
and blogs.
Social customer relationship marketing. Social CRM is a powerful business tool. For
example, a Facebook page lets people who like a company's brand like the business's
page. This, in turn, creates ways to communicate, market and network. Social media sites
give users the option to follow conversations about a product or brand to get real-time
market data and feedback.
Recruiting. Social recruiting has become a key part of employee recruitment strategies.
It is a fast way to reach a lot of potential candidates, both active job seekers and people
who were not thinking about a job change until they saw the recruitment post.
Enterprise social networking. Businesses also use enterprise social networking to
connect people who share similar interests or activities. Public social media platforms let
organizations stay close to customers and make it easy to conduct market research.
Benefits of social media
Social media provides several benefits, including the following:
User visibility. Social platforms let people easily communicate and exchange ideas or
content.
Business and product marketing. These platforms enable businesses to quickly publicize
their products and services to a broad audience. Businesses can also use social media to
maintain a following and test new markets. In some cases, the content created on social
media is the product.
Audience building. Social media helps entrepreneurs and artists build an audience for their
work. In some cases, social media has eliminated the need for a distributor, because anyone
can upload their content and transact business online.
Mobile Technologies
Mobile technology is a type of technology in which a user utilizes a mobile phone to perform
communications-related tasks, such as communicating with friends, relatives, and others. It is
used to send data from one system to another. Portable two-way communications systems,
computing devices, and accompanying networking equipment make up mobile technology.
Definition: Any gadget with internet capabilities that can be accessed from anywhere is
referred to as mobile technology. Smartphones, tablets, some iPods, and laptops already fall
within this category, but this list will undoubtedly grow in the future years.
Your website and social media should work together seamlessly. This helps promote your brand
while boosting traffic to your social media accounts.
These are the social share buttons you see at the bottom of most blog posts. They sometimes also
appear at the top. They help increase awareness of your content, while also giving your readers a
seamless way to share your content. The improved user experience will be a boon to your
website.
One great way to spruce up your website while integrating social media is by including a feed of
social media posts on your pages. These are typically live feeds of your social media posts.
However, you can also use a branded hashtag in order to showcase a feed of posts from your
followers and fans.
iii. Create a social login option
Have you ever gone to a website that allowed you to login using your Google, Facebook, or
Twitter account? Those are great examples of social logins!
It’s so much easier to use a social media account to login than creating an entirely new profile,
picking a password, and confirming it on your email—only to have to log in again when you’re
done. Instead, it’s just a few clicks at most and you’re in.
Not only is this a great way to integrate social media on your website, but it’s also the way most
people prefer to login. In fact, one study found that 73% of users prefer to login using their social
media accounts.
Placing social share plugins on product pages of an e-commerce website is a trending practice.
This is because it helps generate social presence and conversation about the product on the
social media channel.
A social share plugin allows the e-commerce store to place social buttons on a product page.
These buttons when used by a website visitor allow him to share the product page on the
chosen social channel. Along with the other social media feeds displayed on the website, the product page can then be shared successfully.
The social share plugins must be placed in close proximity to the image of the product or
services to enable the website visitor to ‘see easily’ and share it quickly.
Embedding a social media feed widget on a website is one of the most lucrative approaches for
displaying your social media hashtag feed.
A social media widget is capable of displaying all your social media feeds together at one
place. The brand generated posts and fan-created content (user-generated content) are all
aggregated and extracted by this tool into visually creative social media feeds. Social widgets work exceptionally well for corporate events, product launch events, promotional influencer meets, weddings, brand activations and more.
You can collect & curate social media content on your website but the highlight is that you can
make your social media content SHOPPABLE. Using the visual commerce solutions from
Taggbox, you can tag products into your social media posts or UGC posts, add a buying option, and embed it as a shoppable social media gallery on your website. This will help you increase
your possibilities of conversions, build brand reliability & trust, boost engagement, and deliver
a peerless shopping experience.
Videos grab user attention and let the message sink into the viewer's mind. Social media videos are
simply videos that are created and shared on social channels. Such videos provide an easy
social media integration on website. Thus you can embed the social media feed of your videos
on your website effortlessly.
Commenting tools encourage conversations and allow for human-to-human interaction thus
creating a strong authenticity and reliability of the information.
These ‘commenting systems’ are designed in a way that requires commenters to sync one of
their social media accounts for commenting. This way only a genuine profile is linked to your
identity as a commenter, and it helps reduce the presence of trolls and spam online.
Showcasing social proof is simply integrating social reviews and recommendations of your
former customers on your website for your potential buyers and website visitors to see. Social
proof has been said to be trusted by 79% of customers and it also helps augment sales and
website conversions. You can also ask your audience to post a review using your hashtag; this hashtag can then be used to create and display a live hashtag feed across any marketing touchpoint!
Social media integration strategies for email marketing
You can integrate social media into your emails. Doing so will allow your readers to easily and
quickly find your social accounts and follow you.
The main ways to integrate mobile into your marketing mix include:
Developing a mobile friendly website is at the core of a mobile marketing strategy. Make your
web presence more mobile friendly or better able to accommodate content specific to a
campaign.
The following table outlines the usages of a mobile optimized web site, the benefits, what to
measure and how to measure the outcomes.
Uses Benefits
• Creating mobile landing pagesfor • Improves engagement
campaigns • Reduces scrolling andslow downloads
• Aiding mobile search results, • Creates hyper local marketingopportunities
especially if mobile site is content
• Enables campaign integrationwith other tactics or mediums
specific
• Is quantifiable—can be trackedthrough analytic programs
• Adopting location based point of
sale or “instant purchase” • Provides ability to reach customers‘on-the-go’ and create highly
opportunities effective two-way communication
• Engaging in mobile commerce • Provides single greatest returnon investment
Texting (SMS/MMS)
Text messaging (SMS) and multimedia messaging (MMS) on feature and smartphones have
been well received. They are widely used, not only for personal use, but also in business,
entertainment and education. Texting is twice as popular as browsing or apps. SMS/MMS
marketing represents a more selective and therefore cost-effective opportunity for either
driving traffic or engaging response. The basis of SMS marketing is to make an appealing
offer; it can be a powerful direct response tool with many applications.
Uses:
• Customer text reminders
• Links to mobile coupons, contests
• Sale notifications
• Last-minute alert offers
• Transactions: placing orders or donating to charity by text
• Product/service support
• Appointment confirmations
• Mobile surveys, polls

Benefits:
• Is quick and easy to implement
• Enables timely and relevant exchange of ideas
• Is convenient
• Has a broad reach
• 90% of devices support SMS
• Creates loyalty or a "following"
• Has CRM capability
Mobile Apps
Using apps offered through smartphones to reach a wider audience.
Uses:
• Productivity
• Entertainment
• Utilities (e.g. a weather app)
• Social share communications
• Location search / GPS
• Mobile commerce

Benefits:
• Can be highly targeted
• Enable easy interaction and communications to engage customers and build deeper relationships
• Encourage brand recognition
• Offer unique leading-edge tools
Uses:
• Reaching the mobile audience through banner ads that sit above mobile site content
• Working in conjunction with a lead generation landing page
• Offering local coupons or specials in some form, either in your ads, on your site or elsewhere, to draw in customers

Benefits:
• Provides visual display to highlight specific campaigns
• Delivers personal and location-relevant messages
• Drives response and brand relationships
• Banner placement "above the fold" offers the best exposure for the advertiser and can quickly engage the mobile user if the message is targeted to them
Uses:
QR Code usages:
• Addition to business cards for more contact detail and direct links to social media
• Inclusion in print ad materials to expand on products or services
• Addition to direct mail and event materials for more detailed info
• Placement in storefront windows
• Placement on premium items
SnapTag usages:
• Each code ring position can open up different campaigns, response requests or unique reporting data
• SnapTags can be used across different media platforms to deliver marketing campaigns customized for different consumer segments

Benefits:
• Afford more opportunity for sharing and building community
• Bridge the gap between online and offline media, e.g. provide more details than can fit on print materials
• Enable a quick call to action, e.g. while the store is closed, passersby can still get information or receive discount offers
• All data collected is reported via an analytics dashboard and recorded as attributes in a consumer database, which helps in your CRM efforts
Location Based Service/Geotargeting Mobile Advertising
Geotargeting is a way for your mobile or website to display content-specific information
depending on the location of the user. Through a mobile network that uses geographical
positioning on mobile devices, you can target your marketing by behaviour, knowing where
your prospective target is located, and make offers and calls to action accordingly.
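As a minimal sketch of the location test behind geotargeting, the standard haversine formula gives the great-circle distance between two latitude/longitude points, which can then decide whether a user's device is inside a target radius. The coordinates, radius and function names below are made-up examples.

```python
import math

# Hedged sketch of geotargeting: decide whether a mobile user is close
# enough to a store to receive a location-based offer.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/long points."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def in_target_zone(user, store, radius_km=1.0):
    """True if the user is within radius_km of the store."""
    return haversine_km(*user, *store) <= radius_km

store = (13.0827, 80.2707)        # hypothetical store location
nearby_user = (13.0830, 80.2710)  # a few tens of metres away
far_user = (13.20, 80.40)         # well outside the zone
```

A campaign system would run this test against incoming device positions and push the offer only to users for whom it returns True.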
Uses:
• Promoting events
• Responding to user requests for the nearest service or business, e.g. restaurant, ATM
• Participating in apps that provide users navigation to any street address
• Making location-based offers to users of apps that allow them to locate "friends" on a map via mobile
• Placing mobile ads that appear only when mobile users are in a particular area

Benefits:
• Very effective for attracting impulse buyers
• Provides convenience and thus builds loyalty
• Ties in nicely with the global positioning software on most smart and feature phones
• Location-targeted ads offer more opportunities to reach clients
Uses:
• Promoting products or services on mobile messaging channels, including email, text messaging, mobile app push notifications, QR code scanning, and social networks
• Texting SMS to deliver product or promotional information alerts, track inquiries and receive order status updates

Benefits:
• Real-time apps improve customer satisfaction
• Knowing where your users are can help you optimize local deals and shipping logistics
• Catches impulse buyers
• Allows for instant customer service interaction
• Builds consumer loyalty through added convenience
Security
All sorts of data, whether government, corporate, or personal, need high security; however, some data, such as that belonging to government defense systems, banks, and defense research and development organizations, is highly confidential, and even a small amount of negligence with this data may cause great damage to the whole nation. Therefore, such data needs security at a very high level.
The protection of information systems against unauthorized access to or modification of
information, whether in storage, processing or transit, and against the denial of service to
authorized users, including those measures necessary to detect, document, and counter such
threats.
Definition: Information systems security refers to the processes and methodologies involved
with keeping information confidential, available and assuring its integrity.
Information systems security does not just deal with computer information, but also protecting
data and information in all of its forms.
Objectives of Information System Security
There are three main objectives, commonly known as CIA: Confidentiality, Integrity and Availability. This is also called the Information Security Triad (CIA).
ii) Access Control
Once a user has been authenticated, the next step is to ensure that they can only access the
information resources that are appropriate. This is done through the use of access control. Access
control determines which users are authorized to read, modify, add, and/or delete information.
Several different access control models exist.
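A minimal sketch of one such model, an access control list (ACL), is shown below; the users, resources and permissions are invented for illustration.

```python
# Minimal access-control sketch: an access control list (ACL) mapping
# (user, resource) pairs to the operations that user may perform.
# All users, resources and permissions here are hypothetical.

ACL = {
    ("alice", "payroll.db"): {"read", "modify"},
    ("bob", "payroll.db"): {"read"},
}

def is_authorized(user, resource, operation):
    """True only if the ACL grants `operation` on `resource` to `user`."""
    return operation in ACL.get((user, resource), set())
```

Every read, modify, add or delete request is checked against the list, and anything not explicitly granted is denied by default.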
iii) Encryption
Many times, an organization needs to transmit information over the Internet or transfer it on
external media such as a CD or flash drive. In these cases, even with proper authentication and
access control, it is possible for an unauthorized person to get access to the data.
Encryption is a process of encoding data upon its transmission or storage so that only authorized
individuals can read it. This encoding is accomplished by a computer program, which encodes
the plain text that needs to be transmitted; then the recipient receives the cipher text and decodes
it (decryption). In order for this to work, the sender and receiver need to agree on the method of
encoding so that both parties can communicate properly. Both parties share the encryption key,
enabling them to encode and decode each other’s messages. This is called symmetric key
encryption. This type of encryption is problematic because the key is available in two different
places.
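The symmetric-key idea can be illustrated with a toy repeating-key XOR cipher. This only demonstrates "the same key encodes and decodes"; it is not a secure algorithm, and real systems use ciphers such as AES.

```python
# Toy illustration of symmetric key encryption: the SAME key both
# encodes and decodes the message. Repeating-key XOR is used purely
# to show the idea -- it is NOT a secure cipher.

def xor_crypt(data: bytes, key: bytes) -> bytes:
    """Encrypts and decrypts: applying it twice restores the input."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

key = b"shared-secret"                    # known to sender AND receiver
plain = b"transfer 100 to account 42"
cipher = xor_crypt(plain, key)            # sender encodes
restored = xor_crypt(cipher, key)         # receiver decodes with same key
```

The weakness described above is visible here: the key must exist at both ends, so it can be stolen from either place.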
iv) Password Security
It turns out that this single-factor authentication is extremely easy to compromise. Good
password policies must be put in place in order to ensure that passwords cannot be compromised.
Below are some of the more common policies that organizations should put in place.
Require complex passwords.
Change passwords regularly.
Train employees not to give away passwords.
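A complexity check of the kind such a policy might require can be sketched as follows; the exact rules (minimum length, mixed case, digit, symbol) are an assumed example, not a universal standard.

```python
import string

# Hypothetical password-policy check: at least 8 characters with
# lowercase, uppercase, a digit and a punctuation symbol.

def is_complex(password: str) -> bool:
    return (len(password) >= 8
            and any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))
```

A system would run this check when a password is set or changed, rejecting anything that fails it.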
v) Backups
Another essential tool for information security is a comprehensive backup plan for the entire
organization. Not only should the data on the corporate servers be backed up, but individual
computers used throughout the organization should also be backed up. A good backup plan
should consist of several components.
A full understanding of the organizational information resources.
Regular backups of all data. Critical data should be backed up daily, while less critical
data could be backed up weekly.
Offsite storage of backup data sets. It is essential that part of the backup plan is to store
the data in an offsite location.
Test of data restoration. This will ensure that the process is working and will give the
organization confidence in the backup plan.
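The backup-and-restore-test steps can be sketched in miniature: copy a file to a separate location, then verify that the copy matches the original (the "test of data restoration" component). The file names and the "offsite" directory are stand-ins.

```python
import pathlib
import shutil
import tempfile

# Miniature sketch of a backup plan's core steps; paths are
# temporary and purely illustrative.

def backup(src: pathlib.Path, backup_dir: pathlib.Path) -> pathlib.Path:
    """Copy src into backup_dir, preserving metadata."""
    backup_dir.mkdir(parents=True, exist_ok=True)
    dest = backup_dir / src.name
    shutil.copy2(src, dest)
    return dest

def restore_matches(src: pathlib.Path, copy: pathlib.Path) -> bool:
    """The restoration test: backup content must equal the original."""
    return src.read_bytes() == copy.read_bytes()

workdir = pathlib.Path(tempfile.mkdtemp())
original = workdir / "critical.dat"
original.write_text("customer records")
copy = backup(original, workdir / "offsite")  # stand-in for offsite storage
```

Running the restoration test regularly is what gives the organization confidence that backups are actually usable.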
Additional concepts related to backup include the following:
Uninterruptible Power Supply (UPS). A UPS is a device that provides battery backup to
critical components of the system, allowing them to stay online longer and/or allowing
the IT staff to shut them down using proper procedures in order to prevent the data loss
that might occur from a power failure.
vi) Firewalls
Another method that an organization should use to increase security on its network is a firewall.
A firewall can exist as hardware or software (or both).
A hardware firewall is a device that is connected to the network and filters the packets based on a
set of rules.
A software firewall runs on the operating system and intercepts packets as they arrive to a
computer.
A firewall protects all company servers and computers by stopping packets from outside the
organization’s network that do not meet a strict set of criteria. A firewall may also be configured
to restrict the flow of packets leaving the organization. This may be done to eliminate the
possibility of employees watching YouTube videos or using Facebook from a company
computer.
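A rule-based packet filter of the kind described can be sketched as follows; the rule format, ports and packet fields are illustrative assumptions, not a real firewall's configuration language.

```python
# Sketch of a rule-based packet filter: packets that match no
# "allow" rule are dropped. Rule format and ports are invented.

ALLOW_RULES = [
    {"dest_port": 443, "protocol": "tcp"},  # inbound HTTPS
    {"dest_port": 25,  "protocol": "tcp"},  # inbound mail
]

def permitted(packet: dict) -> bool:
    """True if the packet matches every field of any allow rule."""
    return any(all(packet.get(k) == v for k, v in rule.items())
               for rule in ALLOW_RULES)
```

The same mechanism, applied to outbound traffic with a deny list of destinations, is how a firewall restricts packets leaving the organization.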
vii) Intrusion Detection Systems
Another device that can be placed on the network for security purposes is an intrusion detection
system, or IDS. An IDS does not add any additional security; instead, it provides the
functionality to identify if the network is being attacked. An IDS can be configured to watch for
specific types of activities and then alert security personnel if that activity occurs.
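One concrete IDS-style check, flagging repeated failed logins from a single source, might look like the sketch below; the event format and threshold are assumptions for illustration.

```python
from collections import Counter

# Sketch of the IDS idea: watch a stream of events for a specific
# pattern (repeated failed logins from one address) and raise an
# alert. Event format and threshold are illustrative.

def detect_bruteforce(events, threshold=3):
    """Return the set of source IPs with >= threshold failed logins."""
    failures = Counter(e["src"] for e in events
                       if e["type"] == "login_failed")
    return {ip for ip, n in failures.items() if n >= threshold}

log = [
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.5", "type": "login_failed"},
    {"src": "10.0.0.9", "type": "login_ok"},
]
alerts = detect_bruteforce(log)
```

Note that, as the text says, the IDS only identifies the attack; acting on the alert (for example, blocking the address) is left to security personnel or other controls.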
viii) Sidebar: Virtual Private Networks
A VPN allows a user who is outside of a corporate network to take a detour around the firewall
and access the internal network from the outside. Through a combination of software and
security measures, this lets an organization allow limited access to its networks while at the same
time ensuring overall security.
Physical Security
Physical security is the protection of the actual hardware and networking components that store
and transmit information resources. To implement physical security, an organization must
identify all of the vulnerable resources and take measures to ensure that these resources cannot
be physically tampered with or stolen. These measures include the following.
Locked doors: It may seem obvious, but all the security in the world is useless if an
intruder can simply walk in and physically remove a computing device. High-value
information assets should be secured in a location with limited access.
Physical intrusion detection: High-value information assets should be monitored through
the use of security cameras and other means to detect unauthorized access to the physical
locations where they exist.
Secured equipment: Devices should be locked down to prevent them from being stolen.
One employee’s hard drive could contain all of your customer information, so it is
essential that it be secured.
Environmental monitoring: An organization’s servers and other high-value equipment
should always be kept in a room that is monitored for temperature, humidity, and airflow.
The risk of a server failure rises when these factors go out of a specified range.
Employee training: One of the most common ways thieves steal corporate information is
to steal employee laptops while employees are traveling. Employees should be trained to
secure their equipment whenever they are away from the office.
Security Policies
Besides the technical controls listed above, organizations also need to implement security
policies as a form of administrative control. In fact, these policies should really be a starting
point in developing an overall security plan. A good information-security policy lays out the
guidelines for employee use of the information resources of the company and provides the
company recourse in the case that an employee violates a policy.
A security policy should be based on the guiding principles of confidentiality, integrity, and
availability.
A good example of a security policy that many will be familiar with is a web use policy. A web
use policy lays out the responsibilities of company employees as they use company resources to
access the Internet. A security policy should also address any governmental or industry
regulations that apply to the organization.
Cyber Security
Cyber security is a well-designed technique to protect computers, networks, different programs,
personal data, etc., from unauthorized access.
How to Secure Data?
Let us now discuss how to secure data. In order to make your security system strong, you need
to pay attention to the following −
Security Architecture
Network Diagram
Security Assessment Procedure
Security Policies
Risk Management Policy
Backup and Restore Procedures
Disaster Recovery Plan
Risk Assessment Procedures
Once you have a complete blueprint of the points mentioned above, you can put a better security system in place for your data and can also retrieve your data if something goes wrong.
Let's now look at some of the threats that information systems face:
Computer Viruses – these are malicious programs as described in the above section. The
threats posed by viruses can be eliminated or the impact minimized by using Anti-Virus software
and following laid down security best practices of an organization.
Data Loss – if the data center caught fire or was flooded, the hardware with the data can be
damaged, and the data on it will be lost. As a standard security best practice, most organizations
keep backups of the data at remote places. The backups are made periodically and are usually put
in more than one remote area.
Biometric Identification – this is now becoming very common especially with mobile devices
such as smartphones. The phone can record the user's fingerprint and use it for authentication
purposes. This makes it harder for attackers to gain unauthorized access to the mobile device.
Such technology can also be used to stop unauthorized people from getting access to your
devices.
Hacking - Hacking refers to the misuse of devices like computers, smartphones, tablets, and
networks to cause damage to or corrupt systems, gather information on users, steal data and
documents, or disrupt data-related activity.
Types of hackers
In general computer parlance, they are classified as white hat, black hat and grey hat hackers.
White hat hackers hack to check their own security systems and make them more hack-proof.
Black hat hackers hack to take control over the system for personal gains. They can destroy,
steal or even prevent authorized users from accessing the system.
Grey hat hackers are curious people who have just about enough computer language skills to
enable them to hack a system to locate potential loopholes.
What is Ethical Hacking?
Ethical hacking is identifying weaknesses in computer systems and/or computer networks and coming up with countermeasures that protect those weaknesses. Ethical hackers must abide by the following rules.
Get written permission from the owner of the computer system and/or computer
network before hacking.
Protect the privacy of the organization being hacked.
Transparently report all the identified weaknesses in the computer system to the
organization.
Inform hardware and software vendors of the identified weaknesses.
Why Ethical Hacking?
Information is one of the most valuable assets of an organization. Keeping information
secure can protect an organization’s image and save an organization a lot of money.
Hacking can lead to loss of business for organizations that deal in finance such as PayPal.
Ethical hacking puts them a step ahead of the cyber criminals who would otherwise lead
to loss of business.
What is Security Testing?
Security testing is a testing technique to determine if an information system protects data and
maintains functionality as intended. It also aims at verifying six basic principles, as listed below:
Confidentiality
Integrity
Authentication
Authorization
Availability
Non-repudiation
Security Testing is a type of Software Testing that uncovers vulnerabilities, threats, risks in a
software application and prevents malicious attacks from intruders. The purpose of Security
Tests is to identify all possible loopholes and weaknesses of the software system that might result in a loss of information, revenue, or reputation at the hands of employees or outsiders of the organization.
The goal of security testing is to identify the threats in the system and measure its potential
vulnerabilities, so that the system does not stop functioning or get exploited. It also helps in detecting all possible security risks in the system and helps developers fix these problems through coding.
Types of Security Testing:
There are seven main types of security testing as per the Open Source Security Testing Methodology Manual. They are explained as follows:
Vulnerability Scanning: This is done through automated software to scan a system
against known vulnerability signatures.
Security Scanning: It involves identifying network and system weaknesses and later provides solutions for reducing these risks. Scanning can be performed both manually and with automated tools.
Penetration testing: This kind of testing simulates an attack from a malicious hacker.
This testing involves analysis of a particular system to check for potential vulnerabilities
to an external hacking attempt.
Risk Assessment: This testing involves analysis of security risks observed in the
organization. Risks are classified as Low, Medium and High. This testing recommends
controls and measures to reduce the risk.
Security Auditing: This is an internal inspection of Applications and Operating systems
for security flaws. An audit can also be done via line by line inspection of code
Ethical hacking: This is hacking an organization's software systems. Unlike malicious hackers, who steal for their own gain, the intent is to expose security flaws in the system.
Posture Assessment: This combines Security scanning, Ethical Hacking and Risk
Assessments to show an overall security posture of an organization.
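As a concrete illustration of the first type, vulnerability scanning can be sketched as matching an inventory of installed software against known vulnerability signatures. The signature list below is a tiny hand-made stand-in for a real vulnerability feed, and the scan function is only a teaching sketch, not an actual scanner:

```python
# Minimal sketch of signature-based vulnerability scanning.
# KNOWN_VULNERABLE is a hand-made stand-in for a real signature database.
KNOWN_VULNERABLE = {
    ("apache", "2.4.49"): "CVE-2021-41773 (path traversal)",
    ("openssl", "1.0.1"): "CVE-2014-0160 (Heartbleed)",
}

def scan(inventory):
    """Match installed (product, version) pairs against known signatures."""
    findings = []
    for product, version in inventory:
        issue = KNOWN_VULNERABLE.get((product.lower(), version))
        if issue:
            findings.append((product, version, issue))
    return findings

# Scan a made-up software inventory
for product, version, issue in scan([("Apache", "2.4.49"), ("nginx", "1.25.3")]):
    print(product, version, "->", issue)
```

A real scanner works the same way in principle, but draws its signatures from continuously updated vulnerability databases and probes the systems over the network.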
Vulnerability
Vulnerability is any mistake or weakness in the system security procedures, design,
implementation, or any internal control that may result in the violation of the system's security
policy.
A flaw or weakness in a system's design, implementation, or operation and management that
could be exploited to violate the system's security policy
Vulnerability—Weakness in an information system, system security procedures, internal
controls, or implementation that could be exploited by a threat source
Vulnerabilities are related to:
physical environment of the system
the personnel
management
administration procedures and security measures within the organization
business operation and service delivery
hardware
software
communication equipment and facilities
peripheral devices
their combinations
It is evident that a purely technical approach cannot protect even physical assets: one should
have administrative procedures to let maintenance personnel enter the facilities, and people with
adequate knowledge of the procedures, motivated to follow them with proper care.
Four examples of vulnerability exploits:
an attacker finds and uses an overflow weakness to install malware to export sensitive
data;
an attacker convinces a user to open an email message with attached malware;
an insider copies a hardened, encrypted program onto a thumb drive and cracks it at
home;
a flood damages one's computer systems installed at ground floor.
Classification of Vulnerabilities
Vulnerabilities are classified according to the asset class they are related to:
Systems: Payroll system
Down Time: 8 hours
Disaster type: Server damaged
Preventions: We take backup daily
Solution strategy: Restore the backups in the Backup Server
Recover fully: Fix the primary server and restore up-to-date data
You should prepare a list of all contacts of your partners and service providers, such as ISP
contact details, and of the licenses you have purchased and where they were purchased. Document
your entire network, including IP schemas and the usernames and passwords of servers.
Preventive steps to be taken for Disaster Recovery
The server room should have an authorized access level. For example, only IT personnel should
enter at any given point of time.
In the server room there should be a fire alarm, humidity sensor, flood sensor and a
temperature sensor.
At the server level, RAID systems should always be used and there should always be a
spare Hard Disk in the server room.
You should have backups in place; both local and off-site backups are generally
recommended, so a NAS should be in your server room.
Backup should be done periodically.
Internet connectivity is another issue, and it is recommended that the headquarters
have one or more internet lines: one primary and one secondary, with a device
that offers redundancy.
If you are an enterprise, you should have a disaster recovery site, generally
located outside the city of the main site. Its main purpose is to act as a stand-by: in
case of a disaster, it replicates and backs up the data.
What is Recovery Testing?
Recovery testing verifies the system's ability to recover from points of failure like
software/hardware crashes, network failures etc. The purpose of Recovery Testing is to
determine whether operations can be continued after a disaster or after the integrity of the system
has been lost. It involves reverting to a point where the integrity of the system was known and
then reprocessing transactions up to the point of failure.
Recovery Testing Example
When an application is receiving data from the network, unplug the connecting cable.
After some time, plug the cable back in and analyze the application’s ability to continue
receiving data from the point at which the network connection was broken.
Restart the system while a browser has a definite number of sessions open, and check
whether the browser is able to recover all of them or not.
In Software Engineering, Recoverability Testing is a type of Non- Functional Testing. (Non-
functional testing refers to aspects of the software that may not be related to a specific function
or user action such as scalability or security.)
The time taken to recover depends upon:
The number of restart points
A volume of the applications
Training and skills of people conducting recovery activities and tools available for
recovery.
When there are a number of failures, then instead of taking care of all of them at once,
recovery testing should be done in a structured fashion, which means it should be carried out
for one segment and then another.
It is done by professional testers. Before recovery testing, adequate backup data is kept in secure
locations. This is done to ensure that the operation can be continued even after a disaster.
Life Cycle of Recovery Process
The life cycle of the recovery process can be classified into the following five steps:
1. Normal operation
2. Disaster occurrence
3. Disruption and failure of the operation
4. Disaster clearance through the recovery process
5. Reconstruction of all processes and information to bring the whole system back to
normal operation
Let’s discuss these 5 steps in detail-
1. A system consisting of hardware, software, and firmware integrated to achieve a
common goal is made operational for carrying out a well-defined and stated goal. The
system is called to perform the normal operation to carry out the designed job without
any disruption within a stipulated period of time.
2. A disruption may occur due to malfunction of the software, for various reasons
such as input-initiated malfunction, software crashing due to hardware failure, or
damage due to fire, theft, or strike.
3. The disruption phase is the most painful phase, which leads to business losses,
relationship breakdowns, opportunity losses, man-hour losses and invariably financial
and goodwill losses. Every sensible agency should have a plan for disaster recovery
to keep the disruption phase minimal.
4. If a backup plan and risk mitigation processes are in place before
encountering disaster and disruption, then recovery can be done without much loss of
time, effort and energy. A designated individual, along with a team in which the
assigned role of each person is defined, should be in place to fix responsibility and
help save the organization from a long disruption period.
5. Reconstruction may involve multiple sessions of operation to rebuild all folders along
with configuration files. There should be proper documentation and process of
reconstruction for correct recovery.
Restoration Strategy
The recovery team should have their unique strategy for retrieving the important code and
data to bring the operation of the agency back to normalcy.
The strategy can be unique to each organization based on the criticality of the systems they
are handling.
The possible strategy for critical systems can be visualized as follows:
1. To have a single backup or more than one
2. To have multiple back-ups at one place or different places
3. To have an online backup or offline backup
4. Whether the backup is done automatically based on a policy or manually
5. Whether to have an independent restoration team or to utilize the development team
itself for the work
Each of these strategies has a cost factor associated with it, and the multiple resources required
for multiple back-ups may consume more physical resources or may need an independent team.
Many companies may be affected due to their data and code dependency on the concerned
developer agency. For instance, if Amazon AWS goes down, it takes a significant portion of
the internet down with it. Independent restoration is crucial in such cases.
How to do Recovery Testing
While performing recovery testing following things should be considered.
We must create a test bed as close to actual conditions of deployment as possible.
Changes in interfacing, protocol, firmware, hardware, and software should be as close
to the actual condition as possible if not the same condition.
Though exhaustive testing may be time-consuming and a costly affair, identical
configuration and a complete check should be performed.
If possible, testing should be performed on the hardware we are finally going to
restore. This is especially true if we are restoring to a different machine than the one
that created the backup.
Some backup systems expect the hard drive to be exactly the same size as the one the
backup was taken from.
Obsolescence should be managed as drive technology is advancing at a fast pace, and
old drive may not be compatible with the new one. One way to handle the problem is
to restore to a virtual machine. Virtualization software vendors like VMware Inc. can
configure virtual machines to mimic existing hardware, including disk sizes and other
configurations.
Online backup systems are not an exception for testing. Most online backup service
providers protect us from being directly exposed to media problems by the way they
use fault-tolerant storage systems.
While online backup systems are extremely reliable, we must test the restore side of
the system to make sure there are no problems with the retrieval functionality,
security or encryption.
Testing procedure after restoration
Most large corporations have independent auditors to perform recovery test exercises
periodically.
The expense of maintaining and testing a comprehensive disaster recovery plan can be
substantial, and it may be prohibitive for smaller businesses.
Smaller businesses may rely on their data backups and off-site storage plans to save them in the
case of a catastrophe.
After folders and files are restored, following checks can be done to assure that files are
recovered properly:
Rename the corrupted document folder
Count the files in the restored folders and match them against the existing folder.
Open a few of the files and make sure they are accessible. Be sure to open them with
the application that normally uses them, and make sure you can browse the data,
update the data, or whatever you normally do.
It is best to open several files of different types, pictures, mp3s, documents and some
large and some small.
Most operating systems have utilities that you can use to compare files and
directories.
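The file-count and content checks above can be sketched in a few lines of Python using the standard library's filecmp module. The folders and file contents below are a made-up demo, not real backup data:

```python
# Sketch of post-restoration checks: count files in the restored folder,
# match them against the original, and compare contents byte-by-byte.
import filecmp
import os
import tempfile

def verify_restore(original, restored):
    """Return a list of problems found when comparing two folders."""
    problems = []
    orig_files = sorted(os.listdir(original))
    rest_files = sorted(os.listdir(restored))
    if orig_files != rest_files:
        problems.append("file lists differ")
    # shallow=False forces a byte-level comparison, not just metadata
    match, mismatch, errors = filecmp.cmpfiles(
        original, restored, orig_files, shallow=False)
    for name in mismatch + errors:
        problems.append("content differs or unreadable: " + name)
    return problems

# Demo: a faithful restore passes, a corrupted file is flagged
original = tempfile.mkdtemp()
restored = tempfile.mkdtemp()
for folder in (original, restored):
    with open(os.path.join(folder, "a.txt"), "w") as f:
        f.write("hello")
print(verify_restore(original, restored))   # []
with open(os.path.join(restored, "a.txt"), "w") as f:
    f.write("corrupted")
print(verify_restore(original, restored))   # flags a.txt
```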
Computer Crimes
Computer crime / Cybercrime is defined as an unlawful action against any person using a
computer, its systems, and its online or offline applications. It occurs when information
technology is used to commit or cover an offense. However, the act is only considered
Cybercrime if it is intentional and not accidental.
The crime that involves and uses computer devices and Internet, is known as Computer or
Cybercrime.
What is Cybercrime?
Cyber-crime is the use of computers and networks to perform illegal activities such as
spreading computer viruses, online bullying, performing unauthorized electronic fund
transfers, etc. Most cybercrimes are committed through the internet. Some cybercrimes can
also be carried out using Mobile phones via SMS and online chatting applications.
Examples of Cybercrime
The following list presents the common types of cybercrimes:
Computer Fraud: Intentional deception for personal gain via the use of computer
systems.
Privacy violation: Exposing personal information such as email addresses, phone
number, account details, etc. on social media, websites, etc.
Identity Theft: Stealing personal information from somebody and impersonating that
person.
Sharing copyrighted files/information: This involves distributing copyright protected
files such as eBooks and computer programs etc.
Electronic funds transfer: This involves gaining an un-authorized access to bank
computer networks and making illegal fund transfers.
Electronic money laundering: This involves the use of the computer to launder money.
ATM Fraud: This involves intercepting ATM card details such as account number and
PIN numbers. These details are then used to withdraw funds from the intercepted
accounts.
Denial of Service Attacks: This involves the use of computers in multiple locations to
attack servers with a view of shutting them down.
Spam: Sending unauthorized emails. These emails usually contain advertisements.
Types of Cybercrime
Let us now discuss the major types of cybercrime −
1) Hacking
It is an illegal practice by which a hacker breaches someone's computer security system
for personal interest.
2) Unwarranted mass-surveillance
Mass surveillance means surveillance of a substantial fraction of a group of people by an
authority, especially for security purposes; but if someone does it for personal interest, it is
considered a cybercrime.
3) Child pornography
It is one of the most heinous crimes that is brazenly practiced across the world. Children are
sexually abused and videos are being made and uploaded on the Internet.
4) Child grooming
It is the practice of establishing an emotional connection with a child especially for the
purpose of child-trafficking and child prostitution.
5) Copyright infringement
If someone infringes another's protected copyright without permission and publishes the work
under his own name, it is known as copyright infringement.
6) Identity theft
Identity theft occurs when a cyber-criminal impersonates someone else's identity in order to
commit fraud.
7) Denial of Service Attack:
In this cyberattack, the cyber-criminal consumes the bandwidth of the victim's network or fills
their e-mail box with spam. Here, the intention is to disrupt their regular services.
8) Software Piracy:
Theft of software by illegally copying genuine programs or counterfeiting. It also includes the
distribution of products intended to pass for the original.
9) Phishing:
Phishing is a technique of extracting confidential information from bank/financial
institution account holders by illegal means.
10) Spoofing:
It is an act of getting one computer system or a network to pretend to have the identity of
another computer. It is mostly used to get access to exclusive privileges enjoyed by that
network or computer.
11) Click fraud
Advertising companies such as Google AdSense offer pay-per-click advertising services.
Click fraud occurs when a person clicks such a link with no interest in the advertisement
itself but only to generate revenue. This can also be accomplished by using automated
software that makes the clicks.
12) Advance Fee Fraud
An email is sent to the target victim that promises them a lot of money in favor of helping
them to claim their inheritance money.
13) Cyber-extortion
When a hacker hacks someone’s email server, or computer system and demands money to
reinstate the system, it is known as cyber-extortion.
14) Cyber-terrorism
Normally, when someone hacks government’s security system or intimidates government or
such a big organization to advance his political or social objectives by invading the security
system through computer networks, it is known as cyber-terrorism.
15) Computer virus
Viruses are unauthorized programs that can annoy users, steal sensitive data, or be used to
control equipment that is operated by computers.
Securing the Web
There are many ways to assure yourself, employees, and customers that your website is safe.
Website security does not have to be a guessing game. Take essential steps towards
improving your site’s security. Help keep data away from prying eyes.
No method can guarantee your site will forever be “hacker-free.” The use of preventative
methods will reduce your site’s vulnerability. Website security is both a simple and
complicated process.
Website Vulnerabilities & Threats
The most common website security vulnerabilities and threats are:
1. SQL Injections
SQL injection attacks are done by injecting malicious code in a vulnerable SQL query. They
rely on an attacker adding a specially crafted request within the message sent by the website
to the database. A successful attack will alter the database query in such a way that it will
return the information desired by the attacker, instead of the information the website
expected.
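A minimal sketch of the attack and its fix, using Python's built-in sqlite3. The table, the data, and the attacker's input string are all invented for illustration:

```python
# Sketch of an SQL injection and its fix, using the built-in sqlite3 module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: user input concatenated directly into the query text.
# The crafted input alters the query so it matches every row.
query = "SELECT secret FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(query).fetchall())    # leaks every secret

# Safe: a parameterized query treats the input as data, not as SQL.
rows = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (attacker_input,)).fetchall()
print(rows)                              # no rows returned
```

The parameterized form is the standard defense: the database driver keeps the query structure fixed and binds the input as a plain value.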
2. Cross-site Scripting (XSS)
Cross-site scripting attacks consist of injecting malicious client-side scripts into a website and
using the website as a propagation method.
The danger behind XSS is that it allows an attacker to inject content into a website and
modify how it is displayed, forcing a victim’s browser to execute the code provided by the
attacker when loading the page. If a logged in site administrator loads the code, the script will
be executed with their level of privilege, which could potentially lead to site takeover.
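The standard defense is to escape user-supplied content before inserting it into a page. A minimal Python sketch using the standard library's html.escape; the payload is a made-up example:

```python
# Sketch: neutralizing an XSS payload with output escaping.
import html

# A made-up malicious comment submitted by a user
comment = "<script>steal(document.cookie)</script>"

# Unsafe: inserting raw user input into the page lets the browser run it
unsafe_html = "<p>" + comment + "</p>"

# Safe: escaping turns the markup into inert text the browser displays
safe_html = "<p>" + html.escape(comment) + "</p>"
print(safe_html)
```

After escaping, the `<script>` tag arrives in the page as `&lt;script&gt;`, so the victim's browser renders it as text instead of executing it.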
3. Credential Brute Force Attacks
Gaining access to a website’s admin area, control panel or even to the SFTP server is one of
the most common vectors used to compromise websites. The process is very simple; the
attackers basically program a script to try multiple combinations of usernames and passwords
until it finds one that works.
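The attack, and the lockout counter that is a common defense against it, can be sketched as follows. The wordlist and the credentials are invented for illustration:

```python
# Sketch of a dictionary attack against one account, and a lockout defense.
MAX_ATTEMPTS = 5
state = {"failed": 0}

def check_login(username, password):
    if state["failed"] >= MAX_ATTEMPTS:
        return "locked"                  # stop answering after repeated failures
    if (username, password) == ("admin", "horse-battery-staple"):
        state["failed"] = 0
        return "ok"
    state["failed"] += 1
    return "denied"

# A dictionary attack walks a wordlist until something works
wordlist = ["123456", "password", "admin", "qwerty", "letmein",
            "horse-battery-staple"]
results = [check_login("admin", guess) for guess in wordlist]
print(results)   # the lockout triggers before the real password is reached
```

Real systems usually combine such lockouts with rate limiting, CAPTCHAs, and two-factor authentication.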
4. Website Malware Infections & Attacks
Using some of the previous security issues as a means to gain unauthorized access to a
website, attackers can then:
Inject SEO spam on the page
Drop a backdoor to maintain access
Collect visitor information or credit card data
Run exploits on the server to escalate access level
Use visitors’ computers to mine crypto currencies
Store botnets command & control scripts
Show unwanted ads, redirect visitors to scam sites
Host malicious downloads
Launch attacks against other sites
5. DoS/DDoS Attacks
A Distributed Denial of Service (DDoS) attack is a non-intrusive internet attack. It is made to
take down the targeted website or slow it down by flooding the network, server or application
with fake traffic.
DDoS attacks are threats that website owners must familiarize themselves with as they are a
critical piece of the security landscape. When a DDoS attack targets a vulnerable resource-
intensive endpoint, even a tiny amount of traffic is enough for the attack to be successful.
Website Security Framework
The US National Institute of Standards and Technology (NIST) developed The Cyber
security Framework which forms the basis of our website security principles framework.
Since security is a continuous process, it starts with the foundation of a website
security framework. This framework will involve creating a “culture of security”, where
scheduled audits will help in keeping things simple and timely.
The five functions: Identify, Protect, Detect, Respond and Recover will be broken out in
more detail along with actions to be applied.
There are at least ten essential steps you can take to improve website safety.
How to Improve Your Websites Safety
1. Keep Software And Plugins Up-To-Date
Every day, there are countless websites compromised due to outdated software. Potential
hackers and bots are scanning sites to attack. Updates are vital to the health and security of
your website. If your site’s software or applications are not up-to-date, your site is not secure.
Take all software and plugin update requests seriously.
Updates often contain security enhancements and vulnerability repairs. Check your website
for updates or add an update notification plugin. Some platforms allow automatic updates,
which is another option to ensure website security.
2. Add HTTPS and an SSL Certificate
To keep your website safe, you need a secure URL. If your site asks visitors to send their
private information, you need HTTPS, not HTTP, to deliver it.
What is HTTPS?
HTTPS (Hypertext Transfer Protocol Secure) is a protocol used to provide security over the
Internet. HTTPS prevents interceptions and interruptions from occurring while the content is
in transit.
For you to create a secure online connection, your website also needs an SSL Certificate. If
your website asks visitors to register, sign-up, or make a transaction of any kind, you need to
encrypt your connection.
What is SSL?
SSL (Secure Sockets Layer) is another necessary site protocol. It transfers a visitor’s
personal information between the website and your database. SSL encrypts information to
prevent others from reading it while in transit.
It also denies those without proper authority the ability to access the data.
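As a small illustration, Python's standard ssl module shows the two checks an HTTPS client relies on: by default, a client context requires a valid certificate and a matching hostname:

```python
# Sketch: the default HTTPS client settings in Python's ssl module.
import ssl

ctx = ssl.create_default_context()

# The server's certificate chain must validate against trusted CAs
print(ctx.verify_mode == ssl.CERT_REQUIRED)   # True

# The hostname in the URL must match the certificate
print(ctx.check_hostname)                     # True
```

Disabling either check (as some code snippets found online do) silently removes the protection HTTPS is supposed to provide.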
3. Choose a Smart Password
With there being so many websites, databases, and programs needing passwords, it is hard to
keep track. A lot of people end up using the same password in all places, to remember their
login information. But this is a significant security mistake.
Create a unique password for every new log in request. Come up with complicated, random,
and difficult to guess passwords. Then, store them outside the website directory.
Refrain from using any personal information inside your password as well. Do not use your
birthday or pet’s name; make it completely unguessable. After three months or sooner,
change your password to another one, then repeat. Smart passwords are long and should be
at least twelve characters, combining numbers, symbols, and both uppercase and lowercase
letters. Never use the same password twice or share it with others.
If you are a business owner or CMS manager, ensure all employees change their passwords
frequently.
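A password meeting the rules above can be generated with Python's cryptographically secure secrets module. This is one possible sketch, not a mandated scheme:

```python
# Sketch of a password generator: at least twelve characters, mixing
# digits, symbols, and both letter cases, using the `secrets` module.
import secrets
import string

def generate_password(length=16):
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # retry until every required character class is present
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)
                and any(c in string.punctuation for c in pw)):
            return pw

print(generate_password())
```

Note that `secrets` (unlike the `random` module) is designed for security-sensitive randomness, which is why it is used here.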
4. Use a Secure Web Host
Think of your website’s domain name as a street address. Now, think of the web host as the
plot of “real estate” where your website exists online.
As you would research a plot of land to build a house, you need to examine potential web
hosts to find the right one for you. Many hosts provide server security features that better
protect your uploaded website data. There are certain items to check for when choosing a
host.
Does the web host offer a Secure File Transfer Protocol (SFTP)?
Is FTP Use by Unknown User disabled?
Does it use a Rootkit Scanner?
Does it offer file backup services?
How well do they keep up to date on security upgrades?
Whether you choose SiteGround or WP Engine as your web host, make sure it has what you
need to keep your site secure.
5. Record User Access and Administrative Privileges
Initially, you may feel comfortable giving several high-level employees access to your
website. If they make a mistake or overlook an issue, this can result in a significant security
issue. It is vital to vet your employees before giving them website access. Find out if they
have experience using your CMS and if they know what to look for to avoid a security
breach. Educate every CMS user about the importance of passwords and software updates.
Tell them all the ways they can help maintain the website’s safety.
To keep track of who has access to your CMS and their administrative settings, make a
record and update it often. Employees come and go. One of the best ways to prevent security
issues is to have a physical record of who does what with your website.
6. Change Your CMS Default Settings
The most common attacks against websites are entirely automated. What many attack bots
rely on is for users to have their CMS settings on default. After choosing your CMS, change
your default settings immediately. Changes help prevent a large number of attacks from
occurring.
CMS settings can include adjusting control comments, user visibility, and permissions.
Customize users and their permission settings. Do not keep the default settings as is, or you
will run into website security issues at some point.
7. Backup Your Website
One of the best methods to keep your site safe is to have a good backup solution. You should
have more than one. Each is crucial to recovering your website after a major security incident
occurs.
There are several different solutions you can use to help recover damaged or lost files.
Keep your website information off-site. Do not store your backups on the same server as your
website; they are just as vulnerable to attacks there.
Choose to keep your website backup on a home computer or hard drive. Find an off-site place
to store your data and to protect it from hardware failures, hacks, and viruses.
Another option is to back up your website in the cloud. It makes storing data easy and allows
access to information from anywhere. Be redundant in your backup process — backup your
backup. By doing this, you can recover files from any point before the hack or virus occurs.
8. Know Your Web Server Configuration Files
Get to know your web server configuration files. You can find them in the root web directory.
Web server configuration files permit you to administer server rules. This includes directives
to improve your website security.
9. Apply a Web Application Firewall
Make sure you apply a web application firewall (WAF). It sits between your website
server and the data connection. Its purpose is to read every bit of data that passes through it
to protect your site.
10. Tighten Network Security
Even when you think your website is secure, you still need to analyze your network security.
Employees who use office computers may inadvertently be creating an unsafe pathway to
your website.
To prevent them from giving access to your website’s server, consider doing the following at
your business:
o Have computer logins expire after a short period of inactivity.
o Make sure your system notifies users every three months of password changes.
o Ensure all devices plugged into the network are scanned for malware each
time they are attached.
Unit V - New I.T Initiatives
Syllabus: Introduction to Deep learning, Big data, Pervasive Computing, Cloud computing,
Advancements in AI, IoT, Block chain, Crypto currency, Quantum computing
Deep Learning is a subset of Machine Learning that uses mathematical functions to map the
input to the output. These functions can extract non-redundant information or patterns from the
data, which enables them to form a relationship between the input and the output.
Deep learning is extremely powerful when the dataset is large. It can learn any complex patterns
from the data and can draw accurate conclusions on its own. In fact, deep learning is so powerful
that it can even process unstructured data, i.e., data that is not adequately arranged, such as a
text corpus, social media activity, etc. Furthermore, it can also generate new data samples and find anomalies
that machine learning algorithms and human eyes can miss.
In traditional computer programming, input and a set of rules are combined together to get the
desired output. In machine learning and deep learning, input and output are used to learn the
rules. These rules, when combined with new input, yield the desired results.
Machine learning works on a small amount of data for accuracy, whereas deep learning works
on a large amount of data.
Neural Networks
Modern deep learning models use artificial neural networks or simply neural networks to extract
information. These neural networks are made up of a simple mathematical function that can be
stacked on top of each other and arranged in the form of layers, giving them a sense of
depth, hence the term Deep Learning.
The neural network is the heart of deep learning models, and it was initially designed to mimic
the working of the neurons in the human brain.
In essence, neural networks enable us to learn the structure of the data or information and help
us to understand it by performing tasks such as clustering, classification, regression, or sample
generation.
Deep learning is implemented with the help of Neural Networks, and the idea behind
Neural Networks is motivated by biological neurons, which are nothing but brain cells.
Deep learning is a collection of statistical techniques of machine learning for learning feature
hierarchies that are actually based on artificial neural networks.
So basically, deep learning is implemented by the help of deep networks, which are nothing
but neural networks with multiple hidden layers.
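A deep network in this sense can be sketched as a stack of simple layer functions. The following forward pass, written with NumPy and untrained random weights, only illustrates the shape of the computation, not a working model:

```python
# Minimal sketch of a "deep" network: two hidden layers stacked between
# input and output, each layer a simple mathematical function
# (weights, bias, nonlinearity). Weights are random, not trained.
import numpy as np

rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    W = rng.normal(size=(in_dim, out_dim))   # weights
    b = np.zeros(out_dim)                    # bias
    return np.maximum(0, x @ W + b)          # ReLU activation

x = rng.normal(size=(1, 4))                  # one input sample, 4 features
h1 = layer(x, 4, 8)                          # 1st hidden layer
h2 = layer(h1, 8, 8)                         # 2nd hidden layer
out = h2 @ rng.normal(size=(8, 2))           # output layer, 2 scores
print(out.shape)                             # (1, 2)
```

Training such a network means adjusting all those weights by backpropagation so that the outputs match known targets; only then do the layers learn features like the contrast and face patterns described below.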
In the example given above, we provide the raw image data to the first layer, the input
layer. This input layer then determines the patterns of local contrast, that is, it differentiates
on the basis of colors, luminosity, etc. The 1st hidden layer then determines the face
features, i.e., it fixates on eyes, nose, lips, etc., and maps those face features onto the correct
face template. So, in the 2nd hidden layer, it actually determines the correct face, as can be
seen in the above image, after which the result is sent to the output layer. Likewise, more
hidden layers can be added to solve more complex problems: as the hidden layers increase,
we are able to solve more complex problems.
Architectures:
1. Deep Neural Network – It is a neural network with a certain level of complexity
(having multiple hidden layers in between input and output layers). They are
capable of modeling and processing non-linear relationships.
2. Deep Belief Network (DBN) – It is a class of Deep Neural Network. It is multi-
layer belief networks.
Steps for performing DBN:
a. Learn a layer of features from visible units using Contrastive Divergence
algorithm.
b. Treat activations of previously trained features as visible units and then learn
features of features.
c. Finally, the whole DBN is trained when the learning for the final hidden layer is
achieved.
3. Recurrent (perform same task for every element of a sequence) Neural Network –
Allows for parallel and sequential computation. Similar to the human brain (large
feedback network of connected neurons). They are able to remember important
things about the input they received and hence enables them to be more precise.
Types of Deep Learning Networks
5. Autoencoders
An autoencoder neural network is another kind of unsupervised machine learning algorithm.
Here the number of hidden cells is smaller than the number of input cells, but the number
of input cells is equivalent to the number of output cells. An autoencoder network is trained
to display the output similar to the fed input to force AEs to find common patterns and
generalize the data. The autoencoders are mainly used for the smaller representation of the
input. It helps in the reconstruction of the original data from compressed data. This algorithm
is comparatively simple as it only necessitates the output identical to the input.
o Encoder: Converts input data to lower dimensions.
o Decoder: Reconstructs the compressed data.
Applications:
o Classification.
o Clustering.
o Feature Compression.
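A tiny linear autoencoder can be sketched in NumPy: a 4-dimensional input is squeezed through a 2-unit code (the encoder) and reconstructed (the decoder), trained by plain gradient descent on invented toy data. This is purely illustrative; real autoencoders use nonlinear layers and a deep learning framework:

```python
# Tiny linear autoencoder sketch: encoder compresses 4 -> 2,
# decoder reconstructs 2 -> 4, trained on made-up correlated data.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))  # toy dataset

W_enc = rng.normal(scale=0.1, size=(4, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 4))   # decoder weights
lr = 0.01

for _ in range(500):
    Z = X @ W_enc                 # compressed representation (the "code")
    X_hat = Z @ W_dec             # reconstruction
    err = X_hat - X
    # gradients of the mean squared reconstruction error (up to a constant)
    g_dec = Z.T @ err / len(X)
    g_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

loss = np.mean((X @ W_enc @ W_dec - X) ** 2)
print(round(float(loss), 3))      # much lower than the untrained loss
```

Because the 2-unit bottleneck cannot carry all 4 dimensions, the network is forced to keep only the strongest common patterns in the data, which is exactly the compression behavior described above.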
6. Transformers
Transformers are a newer class of deep learning model used mostly for tasks that involve
modeling sequential data, like those in NLP. They are much more powerful than RNNs and
are replacing them in many tasks.
Challenges of Deep Learning:
Data availability
Deep learning models require a lot of data to learn the representation, structure, distribution, and
pattern of the data. If there isn't enough varied data available, then the model will not learn well
and will lack generalization (it won't perform well on unseen data).
The complexity of the model
Designing a deep learning model is often a trial and error process.
A simple model is most likely to underfit, i.e. not able to extract information from the training
set, and a very complex model is most likely to overfit, i.e., not able to generalize well on the test
dataset. Deep learning models will perform well when their complexity is appropriate to the
complexity of the data.
Lacks global generalization
A simple neural network can have thousands to tens of thousands of parameters.
The idea of global generalization is that all the parameters in the model should cohesively
update themselves to reduce the generalization error or test error as much as possible.
However, because of the complexity of the model, it is very difficult to achieve zero
generalization error on the test set.
Incapable of Multitasking
Deep neural networks are incapable of multitasking.
These models can only perform targeted tasks, i.e., process data on which they are trained. For
instance, a model trained on classifying cats and dogs will not classify men and women.
Furthermore, applications that require reasoning or general intelligence are completely beyond
what the current generation’s deep learning techniques can do, even with large sets of data.
Hardware dependence
These models are so complex that a normal CPU cannot withstand the computational load.
Multicore, high-performance graphics processing units (GPUs) and tensor processing units
(TPUs) are therefore required to train these models effectively in a shorter time.
Big Data
What is Big Data?
Data that is very large in size is called Big Data. Normally we work on data of size
MB (Word documents, Excel sheets) or at most GB (movies, code), but data on the scale of
petabytes, i.e., 10^15 bytes, is called Big Data. It is stated that almost 90% of today's data
has been generated in the past 3 years.
Big data is the valuable and powerful fuel that drives large IT industries of the 21st century.
Big data is a rapidly spreading technology used in every business sector.
Big Data consists of large amounts of data that cannot be processed by traditional data
storage or processing units. It is used by many multinational companies to process the
data and run the business of many organizations. The data flow can exceed 150 exabytes
per day before replication.
There are five V's of Big Data that explain its characteristics.
5 V's of Big Data
Volume
The name Big Data itself is related to enormous size. Big Data involves vast 'volumes' of data
generated daily from many sources, such as business processes, machines, social media
platforms, networks, human interactions, and more.
Facebook alone generates approximately a billion messages, records about 4.5 billion clicks
of the "Like" button, and receives more than 350 million new posts each day. Big data
technologies can handle such large amounts of data.
Variety
Big Data can be structured, unstructured, or semi-structured, collected from different
sources. In the past, data was collected only from databases and spreadsheets, but these
days data comes in many forms: PDFs, emails, audio, social media posts, photos,
videos, etc.
The term Big Data refers to a large amount of complex and unprocessed data. Nowadays,
companies use Big Data to make business more informative and to support business
decisions by enabling data scientists, analytical modelers and other professionals to analyse
large volumes of transactional data.
Applications of Big Data
Travel and Tourism
Travel and tourism are major users of Big Data. Big Data enables us to forecast travel facility
requirements at multiple locations, improve business through dynamic pricing, and much more.
Financial and banking sector
The financial and banking sectors use big data technology extensively. Big data analytics
help banks understand customer behaviour on the basis of investment patterns, shopping
trends, motivation to invest, and inputs obtained from personal or financial backgrounds.
Healthcare
Big data has started making a massive difference in the healthcare sector: with the help
of predictive analytics, medical professionals and healthcare personnel can provide
personalized healthcare to individual patients.
Telecommunication and media
Telecommunications and the multimedia sector are major users of Big Data. Zettabytes of
data are generated every day, and handling such large-scale data requires big data
technologies.
Government and Military
The government and military also use big data technology at high rates, as seen in the figures
the government keeps on record. In the military, a fighter plane needs to process petabytes
of data.
Government agencies use Big Data to run many agencies, manage utilities, deal with
traffic jams, and tackle crimes like hacking and online fraud.
Aadhar Card: The government has a record of 1.21 billion citizens. This vast data is analyzed
and stored to find things like the number of youth in the country.
E-commerce
E-commerce is also an application of Big Data. Maintaining relationships with customers is
essential for the e-commerce industry. E-commerce websites use many marketing ideas to
retain customers, manage transactions, and implement better strategies and innovative ideas
to improve business with Big Data.
o Amazon: Amazon is a tremendous e-commerce website dealing with lots of traffic daily.
When there is a pre-announced sale on Amazon, traffic increases rapidly and may
crash the website. So, to handle this type of traffic and data, it uses Big Data, which
helps in organizing and analyzing the data for future use.
Social Media
Social media is the largest data generator. Statistics show that around 500+ terabytes
of fresh data are generated on social media daily, particularly on Facebook. The data mainly
contains videos, photos, message exchanges, etc. A single activity on a social media site
generates a lot of stored data, which is processed when required. Since the stored data runs to
terabytes (TB), processing it takes a lot of time; Big Data is the solution to this problem.
How Big Data Works
Big data gives you new insights that open up new opportunities and business models. Getting
started involves three key actions:
1. Integrate
Big data brings together data from many disparate sources and applications. Traditional data
integration mechanisms, such as extract, transform, and load (ETL), generally aren’t up to the
task. Analyzing big data sets at terabyte, or even petabyte, scale requires new strategies and
technologies.
During integration, you need to bring in the data, process it, and make sure it’s formatted and
available in a form that your business analysts can get started with.
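The integrate step can be sketched as a miniature ETL pipeline in Python. The field names and cleaning rule below are hypothetical, and a real pipeline would read from and write to external systems rather than in-memory data.

```python
import csv, io

# Extract: read raw records from a CSV source (here an in-memory sample).
raw = io.StringIO("user,amount\nalice,100\nbob,\ncarol,250\n")
rows = list(csv.DictReader(raw))

# Transform: drop incomplete records and normalize types.
clean = [
    {"user": r["user"], "amount": int(r["amount"])}
    for r in rows
    if r["amount"]  # skip rows with a missing amount
]

# Load: in a real pipeline this would write to a warehouse; here we
# simply expose the cleaned records that analysts can start with.
print(clean)
```

The point of the sketch is the shape of the work, not the tools: at big-data scale each of these three phases is distributed across many machines.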
2. Manage
Big data requires storage. Your storage solution can be in the cloud, on premises, or both. You
can store your data in any form you want and bring your desired processing requirements and
necessary process engines to those data sets on an on-demand basis. Many people choose their
storage solution according to where their data is currently residing. The cloud is gradually
gaining popularity because it supports your current compute requirements and enables you to
spin up resources as needed.
3. Analyze
Your investment in big data pays off when you analyze and act on your data. Get new clarity
with a visual analysis of your varied data sets. Explore the data further to make new discoveries.
Share your findings with others. Build data models with machine learning and artificial
intelligence. Put your data to work.
Before businesses can put big data to work for them, they should consider how it flows among a
multitude of locations, sources, systems, owners and users, and take charge of this "big data
fabric" that includes traditional, structured data along with unstructured and semi-structured
data.
Hadoop
Hadoop is an open-source framework from Apache used to store, process and analyze data
that is very huge in volume. Hadoop is written in Java and is not OLAP (online analytical
processing). It is used for batch/offline processing. It is used by Facebook, Yahoo, Google,
Twitter, LinkedIn and many more.
Modules of Hadoop
1. HDFS: Hadoop Distributed File System. Google published its GFS paper, and HDFS was
developed on the basis of it. It states that files are broken into blocks and stored on
nodes over the distributed architecture.
2. Yarn: Yet Another Resource Negotiator, used for job scheduling and managing the
cluster.
3. Map Reduce: This is a framework which helps Java programs to do parallel
computation on data using key-value pairs. The Map task takes input data and converts it
into a data set which can be computed over key-value pairs. The output of the Map task is
consumed by the Reduce task, and the output of the reducer then gives the desired result.
4. Hadoop Common: These Java libraries are used to start Hadoop and are used by other
Hadoop modules.
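The Map and Reduce phases described above can be illustrated with the classic word-count example in plain Python; Hadoop itself would run these phases in Java and distribute them across many nodes.

```python
from collections import defaultdict

def map_task(line):
    # Map: convert input data into (key, value) pairs - here (word, 1).
    return [(word, 1) for word in line.split()]

def reduce_task(pairs):
    # Reduce: combine all values for the same key into the final result.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data is big", "data is valuable"]
# Shuffle phase (implicit here): gather every mapper's output together.
all_pairs = [p for line in lines for p in map_task(line)]
print(reduce_task(all_pairs))  # {'big': 2, 'data': 2, 'is': 2, 'valuable': 1}
```

In a real cluster the mappers run on the nodes holding each HDFS block, and the shuffle step routes each key to the reducer responsible for it.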
Pervasive Computing
Pervasive computing, also called ubiquitous computing, is the growing trend toward
embedding everyday objects with microprocessors so that they can communicate information. It
refers to the presence of computers in common objects found all around us, such that people are
unaware of their presence. All these devices communicate with each other over wireless
networks without user interaction.
The terms ubiquitous and pervasive signify "existing everywhere." Pervasive computing
embeds computational capability (generally in the form of microprocessors) into everyday
objects so that they can communicate effectively and perform useful tasks in a way that
minimizes the end user's need to interact with computers as computers. Pervasive computing
devices are network-connected, totally integrated, and constantly available.
Unlike desktop computing, pervasive computing can occur with any device, at any time, in any
place and in any data format across any network and can hand tasks from one computer to
another.
Pervasive computing devices have evolved to include:
laptops;
notebooks;
smartphones;
tablets;
wearable devices and
sensors (for example, on fleet management, lighting systems, appliances).
How is ubiquitous computing used?
Pervasive computing applications have been designed for consumer use and to help people do
their jobs.
An example of pervasive computing is an Apple Watch that alerts the user to a phone call and
allows the call to be completed through the watch. Another example is when a registered user of
Audible, Amazon's audiobook service, starts a book using the Audible app on a
smartphone on the train and continues listening to it through Amazon Echo at home.
An environment in which devices, present everywhere, are capable of some form of computing
can be considered a ubiquitous computing environment.
Pervasive computing is a combination of three technologies, namely:
1. Micro electronic technology:
This technology gives small, powerful devices and displays with low energy consumption.
2. Digital communication technology:
This technology provides higher bandwidth and higher data transfer rates at lower cost,
with worldwide roaming.
3. The Internet standardization:
This standardization is done through various standardization bodies and industry to provide
the framework for combining all components into an interoperable system with
security, service and billing systems.
Thus, wireless communication, consumer electronics and computer technology were all merged
into one to create a new environment called pervasive computing environment. It helps to access
information and render modern administration in areas that do not have a traditional wire-based
computing environment.
Pervasive computing is the next dimension of personal computing in the near future, and it will
definitely change and improve our work environment and communication methods.
Key Characteristics of Pervasive computing:
1. Many devices can be integrated into one system for multi-purpose uses.
2. A huge number of various interfaces can be used to build an optimized user interface.
3. Concurrent online and offline operation is supported.
4. A large number of specialized computers are integrated through local buses and the Internet.
5. Security elements are added to prevent misuse and unauthorized access.
6. Personalization of functions adapts the systems to the user’s preferences, so that no PC
knowledge is required of the user to use and manage the system.
Examples
Examples of pervasive computing include electronic toll systems on highways; tracking
applications, such as Life360, which can track the location of the user, the speed at which they
are driving and how much battery life their smartphone has; Apple Watch; Amazon Echo; smart
traffic lights; and Fitbit.
Applications:
There is a rising number of pervasive devices available in the market nowadays. The areas of
application of these devices include:
Retail
Airlines booking and check-in
Sales force automation
Healthcare
Tracking
Car information systems
Email access via WAP (Wireless Application Protocol) and voice
For example, in the retail industry, there is a requirement for faster and cheaper methods to
bring goods to the consumer from stores via the Internet. Mobile computers are provided with
bar-code readers for tracking products during manufacture. Currently, consumers use computers
to select products. In future, they will use PDAs (Personal Digital Assistants) and pervasive
devices in domestic markets too. When they complete the list of items to be bought on these
devices, the list can be sent to the supermarket, and the purchase can be delivered to the
consumer. The advantages of this are faster processing of data and execution of data mining.
Importance
Because pervasive computing systems are capable of collecting, processing and communicating
data, they can adapt to the data's context and activity. That means a network can understand its
surroundings and improve the human experience and quality of life.
Advantages of pervasive computing
As described above, pervasive computing requires less human interaction than a ubiquitous
computing environment, where there may be more connected devices but the extraction and
processing of data requires more intervention.
Because pervasive computing systems are capable of collecting, processing and communicating
data, they can adapt to the data's context and activity. That means, in essence, that a network
can understand its surroundings and improve the human experience and quality of life.
Disadvantages of pervasive computing
A distinct problem with pervasive computing is that it is not entirely secure. The devices and
technologies used in pervasive computing do not lend themselves well to typical data security.
Problems include frequently broken connections, slow connections, very expensive operating
costs, limited host bandwidth, and location-dependent data.
All of these instances can impede the security of pervasive computing because they result in
multiple system vulnerabilities.
Cloud Computing
What is Cloud Computing?
Cloud Computing can be defined as delivering computing power (CPU, RAM, network speeds,
storage, OS, software) as a service over a network (usually the internet) rather than physically
having the computing resources at the customer's location.
Cloud Deployment Models
1. Private Cloud: Here, computing resources are deployed for one particular organization.
This method is used mainly for intra-business interactions, where the computing resources
are governed, owned and operated by the same organization.
2. Community Cloud: Here, computing resources are shared by a community of
organizations.
3. Public Cloud: This type of cloud is usually used for B2C (Business to Consumer)
interactions. Here the computing resource is owned, governed and operated by a
government, academic or business organization.
4. Hybrid Cloud: This type of cloud can be used for both types of interactions - B2B
(Business to Business) or B2C (Business to Consumer). This deployment method is
called hybrid cloud because the computing resources are bound together from different clouds.
Cloud Computing Services
The three major cloud computing offerings are Infrastructure as a Service (IaaS), Platform as a
Service (PaaS), and Software as a Service (SaaS).
While the back end refers to the cloud itself, it comprises the resources required for
cloud computing services. It consists of virtual machines, servers, data storage, security
mechanisms, etc. It is under the provider's control.
Cloud computing distributes the file system across multiple hard disks and machines.
Data is never stored in only one place, and if one unit fails, another takes over
automatically. User disk space is allocated on the distributed file system, while
another important component is the algorithm for resource allocation. Cloud computing is a
strongly distributed environment that depends heavily on strong algorithms.
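The replication-and-failover behaviour described above can be sketched as follows. The node names and round-robin placement are illustrative assumptions; real distributed file systems also handle consistency, rebalancing and block splitting.

```python
# Each block of data is replicated on several machines, so that if one
# unit fails another replica takes over automatically.
REPLICATION_FACTOR = 2
machines = {"node1": {}, "node2": {}, "node3": {}}

def store(block_id, data):
    # Place copies on REPLICATION_FACTOR machines (simple round-robin;
    # a real allocator balances load and disk usage across nodes).
    for name in sorted(machines)[:REPLICATION_FACTOR]:
        machines[name][block_id] = data

def read(block_id, failed=()):
    # Failover: skip failed machines and serve from any live replica.
    for name, disk in machines.items():
        if name not in failed and block_id in disk:
            return disk[block_id]
    raise OSError("all replicas unavailable")

store("blk-1", b"hello")
print(read("blk-1", failed={"node1"}))  # served from the replica on node2
```

The user never sees which machine served the read; that transparency is what makes the distributed file system appear as one large disk.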
Security concerns for Cloud Computing
While using cloud computing, the major issue that concerns users is its security.
One concern is that cloud providers themselves may have access to customers' unencrypted
data, whether it's on disk, in memory or transmitted over the network.
To provide security for systems, networks and data, cloud computing service providers have
joined hands with the TCG (Trusted Computing Group), a non-profit organization that
regularly releases a set of specifications to secure hardware, create self-encrypting drives and
improve network security. It protects the data from rootkits and malware.
As computing has expanded to different devices, such as hard disk drives and mobile phones,
TCG has extended its security measures to include these devices. It provides the ability to
create a unified data protection policy across all clouds.
Some of the trusted cloud services are Amazon, Box.net, Gmail and many others.
Privacy Concern & Cloud Computing
Privacy presents a strong barrier to users adopting cloud computing systems.
There are certain measures that can improve privacy in cloud computing.
1. The administrative staff of the cloud computing service could theoretically monitor
the data moving in memory before it is stored on disk. To keep data confidential,
administrative and legal controls should prevent this from happening.
2. The other way of increasing privacy is to keep the data encrypted at the cloud
storage site, preventing unauthorized access through the internet, so that even the
cloud vendor cannot access the data.
Artificial Intelligence
Artificial Intelligence (AI), or machine intelligence, is the field developing computers and
robots capable of parsing data contextually to provide requested information, supply analysis,
or trigger events based on findings. Through techniques like machine learning and neural
networks, companies globally are investing in teaching machines to ‘think’ more like
humans.
What is Artificial Intelligence (AI)?
Artificial Intelligence, or simply AI, is the term used to describe a machine’s ability to
simulate human intelligence. Actions like learning, logic, reasoning, perception, and creativity,
which were once considered unique to humans, are now being replicated by technology and
used in every industry.
A common example of AI in today’s world is chatbots, specifically the “live chat” versions
that handle basic customer service requests on company websites.
How does AI work?
Artificial Intelligence is a complex field with many components and methodologies used to
achieve the final result — an intelligent machine. AI was developed by studying the way the
human brain thinks, learns and decides, then applying those biological mechanisms to
computers.
As opposed to classical computing, where coders provide the exact inputs, outputs, and logic,
artificial intelligence is based on providing a machine the inputs and a desired outcome,
letting the machine develop its own path to achieve its set goal. This frequently allows
computers to better optimize a situation than humans, such as optimizing supply chain
logistics and streamlining financial processes.
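This contrast can be sketched in Python: the classical program hard-codes its rule, while the learning version is given example inputs with desired outcomes and searches for the rule itself. The spam-link feature and the threshold search are hypothetical simplifications of real machine learning.

```python
# Classical computing: the programmer supplies the exact rule.
def classical_is_spam(num_links):
    return num_links > 3  # hand-coded logic

# AI-style approach: supply example inputs and desired outcomes and let
# the machine search for the rule (a minimal "training" loop).
def learn_threshold(examples):
    # examples: list of (num_links, is_spam) pairs. Try every candidate
    # threshold and keep the one that classifies the most examples correctly.
    best_t, best_correct = 0, -1
    for t in range(11):
        correct = sum((x > t) == label for x, label in examples)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

examples = [(1, False), (2, False), (5, True), (7, True)]
print(learn_threshold(examples))  # the machine finds a separating threshold itself
```

Real systems replace the brute-force search with gradient-based optimization over millions of parameters, but the principle is the same: the path to the goal is discovered, not programmed.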
Types of AI
There are four types of AI that differ in their complexity of abilities:
1. Reactive machines use previous data to draw conclusions for current decisions, such
as a chess AI.
2. Limited memory uses live data to read a situation and make decisions, such as self-
driving cars.
3. Theory of mind is something we haven’t reached yet, and deals with AI understanding
that every entity has its own set of underlying concepts — such as motives, intentions
and emotions.
4. Self-awareness is the final stage of AI, where an AI can not only understand others’
consciousness (theory of mind) but also has a concept of its own existence.
Examples of AI
Artificial intelligence is used in virtually all businesses; in fact, you likely interact with it in
some capacity on a daily basis. Chatbots, smart cars, IoT devices, healthcare, banking,
and logistics all use artificial intelligence to provide a superior experience.
One AI technology that is quickly finding its way into most consumers’ homes is the voice assistant,
such as Apple’s Siri, Amazon’s Alexa, Google’s Assistant, and Microsoft’s Cortana.
Once simply considered part of a smart speaker, AI-equipped voice assistants are now
powerful tools deeply integrated across entire ecosystems of channels and devices to provide
an almost human-like virtual assistant experience.
Benefits of AI
Artificial intelligence can help reduce human error, create more precise analytics, and turn
data collecting devices into powerful diagnostic tools.
One example of this is wearable devices such as smartwatches and fitness trackers, which put
data in the hands of consumers to empower them to play a more active role managing their
health.
Advancements in AI
In the last five years, the field of AI has made major progress in almost all its standard sub-areas,
including vision, speech recognition and generation, natural language processing (understanding
and generation), image and video generation, multi-agent systems, planning, decision-making, and
integration of vision and motor control for robotics.
In addition, breakthrough applications emerged in a variety of domains including games, medical
diagnosis, logistics systems, autonomous driving, language translation, and interactive personal
assistance.
Language Processing
Language processing technology made a major leap in the last five years, leading to the
development of network architectures with enhanced capability to learn from complex and
context-sensitive data. These models’ facility with language is already supporting
applications such as machine translation, text classification, speech recognition, writing aids,
and chatbots. Future applications could include improving human-AI interactions across
diverse languages and situations.
Computer Vision and Image Processing
Image-processing technology is now widespread, finding uses ranging from video-conference
backgrounds to the photo-realistic images known as deepfakes. Many image-processing
approaches use deep learning for recognition, classification, conversion, and other tasks.
Training time for image processing has been substantially reduced.
Real-time object-detection systems such as YOLO (You Only Look Once) that notice
important objects when they appear in an image are widely used for video surveillance of
crowds and are important for mobile robots including self-driving cars. Face-recognition
technology has also improved significantly over the last five years, and now some
smartphones and even office buildings rely on it to control access.
Games
Developing algorithms for games and simulations in adversarial situations has long been a
fertile training ground and a showcase for the advancement of AI techniques.
Robotics
The last five years have seen consistent progress in intelligent robotics driven by machine
learning, powerful computing and communication capabilities, and increased availability of
sophisticated sensor systems.
Mobility
Autonomous vehicles or self-driving cars have been one of the hottest areas in deployed
robotics, as they impact the entire automobile industry as well as city planning. The design of
self-driving cars requires integration of a range of technologies including sensor fusion, AI
planning and decision-making, vehicle dynamics prediction, on-the-fly rerouting, inter-
vehicle communication, and more.
Driver assist systems are increasingly widespread in production vehicles. These systems use
sensors and AI-based analysis to carry out tasks such as adaptive cruise control to safely
adjust speed, and lane-keeping assistance to keep vehicles centered on the road.
Healthcare:
AI is increasingly being used in biomedical applications, particularly in diagnosis, drug
discovery, and basic life science research. Diseases are more quickly and accurately
diagnosed, drug discovery is sped up and streamlined, virtual nursing assistants monitor
patients and big data analysis helps to create a more personalized patient experience. Tools
now exist for identifying a variety of eye and skin disorders, detecting cancers and supporting
measurements needed for clinical diagnosis.
Finance
AI has been increasingly adopted into finance. Deep learning models now partially automate
lending decisions for several lenders and have transformed payments with credit scoring.
These new systems often take advantage of consumer data that are not traditionally used in
credit scoring.
In the space of personal finance, so-called robo-advising—automated financial advice—is
quickly becoming mainstream for investment and overall financial planning. For financial
institutions, uses of AI are going beyond detecting fraud and enhancing cybersecurity to
automating legal and compliance documentation as well as detecting money laundering.
Recommender Systems
With the explosion of information available to us, recommender systems that automatically
prioritize what we see when we are online have become absolutely essential. Such systems
have always drawn heavily on AI, and now they have a dramatic influence on people’s
consumption of products, services, and content—from news, to music, to videos, and more
Transportation: Although it could take a decade or more to perfect them, autonomous cars
will one day ferry us from place to place.
Manufacturing: AI-powered robots work alongside humans to perform a limited range of
tasks like assembly and stacking, and predictive-analysis sensors keep equipment running
smoothly.
Education: Textbooks are digitized with the help of AI, early-stage virtual tutors assist
human instructors, and facial analysis gauges the emotions of students to help determine
who’s struggling or bored and better tailor the experience to their individual needs.
Media: Journalism is harnessing AI, too, and will continue to benefit from it. Bloomberg
uses Cyborg technology to help make quick sense of complex financial reports. The
Associated Press employs the natural language abilities of Automated Insights to produce
3,700 earnings report stories per year — nearly four times more than in the recent past.
Customer Service: Last but hardly least, Google is working on an AI assistant that can place
human-like calls to make appointments at, say, your neighborhood hair salon. In addition to
words, the system understands context and nuance.
IoT (Internet of Things)
IoT stands for Internet of Things, which means accessing and controlling everyday
equipment and devices using the Internet. The Internet of Things (IoT) is how we describe the
digitally connected universe of everyday physical devices. These devices are embedded with
internet connectivity, sensors and other hardware that allow communication and control via
the web.
Definition: Connecting everyday things embedded with electronics, software, and sensors to
the internet, enabling them to collect and exchange data without human interaction, is called
the Internet of Things (IoT).
The term "Things" in the Internet of Things refers to anything and everything in day to day
life which is accessed or connected through the internet.
IoT is an advanced automation and analytics system which combines artificial intelligence,
sensors, networking, electronics, cloud messaging, etc. to deliver complete systems for a
product or service. Systems created by IoT have greater transparency, control and
performance.
For example, consider a house where we can connect our home appliances, such as the air
conditioner and lights, to each other, with all of these things managed on the same platform.
Since we have a platform, we can also connect our car to it, track its fuel meter and speed
level, and track its location.
It would be great if all these things could connect to each other on a common platform,
because then I could set the room temperature based on my preference. For example, if I like
the room temperature to be set at 25 or 26 degrees Celsius when I reach home from my
office, then, based on my car's location, my AC would start 10 minutes before I arrive
home. This can be done through the Internet of Things (IoT).
IoT makes once "dumb" devices "smarter" by giving them the ability to send data over the
internet, allowing the device to communicate with people and other IoT-enabled things.
The connected "smart home" is a good example of IoT in action. Internet-enabled
thermostats, doorbells, smoke detectors and security alarms create a connected hub where
data is shared between physical devices and users can remotely control the "things" in that
hub (i.e., adjusting temperature settings, unlocking doors, etc.) via a mobile app or website.
How does Internet of Thing (IoT) Work?
IoT works in the following way:
Devices have hardware like sensors, for example, that collect data.
The data collected by the sensors is then shared via the cloud and integrated with
software.
The software then analyzes and transmits the data to users via an app or website.
The working of IoT differs for different IoT ecosystems (architectures). However, the
key concepts of their working are similar. The entire working process of IoT starts with the
devices themselves, such as smartphones, digital watches, and electronic appliances, which
securely communicate with the IoT platform. The platform collects and analyzes data from
all the devices and platforms and transfers the most valuable data, along with applications, to
devices.
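The device-to-platform-to-app flow above can be sketched in Python; the class names and readings are illustrative, not a real IoT API.

```python
# Sensor -> cloud platform -> user app, in miniature.
class Sensor:
    def __init__(self, name, reading):
        self.name, self.reading = name, reading

    def collect(self):
        # Hardware layer: gather a measurement from the device.
        return {"device": self.name, "value": self.reading}

class Platform:
    def __init__(self):
        self.data = []

    def ingest(self, payload):
        # Cloud layer: receive and store data sent by devices.
        self.data.append(payload)

    def analyze(self):
        # Analytics: reduce raw readings to something valuable.
        return max(d["value"] for d in self.data)

platform = Platform()
for sensor in [Sensor("thermostat", 21.5), Sensor("oven", 180.0)]:
    platform.ingest(sensor.collect())

# Application layer: the app shows the user the analyzed result.
print(platform.analyze())  # 180.0
```

In a real deployment the ingest step runs over a secure network protocol and the analysis step may involve stream processing, but the three-stage shape is the same.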
Features of IoT
The most important features of IoT on which it works are connectivity, analyzing,
integrating, active engagement, and many more. Some of them are listed below:
Connectivity: Connectivity refers to establishing a proper connection between all the things of
IoT and the IoT platform, which may be a server or the cloud. After connecting the IoT devices,
high-speed messaging is needed between the devices and the cloud to enable reliable, secure
and bi-directional communication.
Analyzing: After connecting all the relevant things, the next step is analyzing the collected
data in real time and using it to build effective business intelligence. If we have good insight
into the data gathered from all these things, then we can call our system a smart system.
Integrating: IoT integrates various models to improve the user experience as well.
Artificial Intelligence: IoT makes things smart and enhances life through the use of data. For
example, if we have a coffee machine whose beans are about to run out, the coffee machine
itself can order coffee beans of your choice from the retailer.
Sensing: The sensor devices used in IoT technologies detect and measure any change in the
environment and report on their status. IoT technology turns passive networks into active
networks. Without sensors, there could be no effective or true IoT environment.
Active Engagement: IoT enables active engagement between connected technologies,
products, and services.
Endpoint Management: Endpoint management of the whole IoT system is important;
otherwise, the system can fail completely. For example, if a coffee machine orders coffee
beans when they run out, but we are not at home for a few days, the order leads to a failure
of the IoT system. So, there must be endpoint management.
IoT - Platform
In IoT, all the IoT devices are connected to other IoT devices and applications to transmit
and receive information using protocols. There is a gap between the IoT devices and IoT
applications. An IoT platform fills the gap between the devices (sensors) and the application
(network). Thus we can say that an IoT platform is an integrated service that fills the
gap between the IoT device and the application and lets you bring physical objects
online.
IoT Architecture
There is no unique or universally defined standard for Internet of Things (IoT) architecture.
IoT architectures differ according to their functional area and the solutions they target.
However, IoT architecture technology mainly consists of four major
components:
Components of IoT Architecture
o Sensors/Devices
o Gateways and Networks
o Cloud/Management Service Layer
o Application Layer
Stages of IoT Solutions Architecture
The IoT architecture is a fundamental way to design the various elements of IoT so that it
can deliver services over networks and serve future needs.
The following are the primary stages (layers) of an IoT solutions architecture.
1. Sensors/Actuators: Sensors and actuators are devices able to emit, accept, and process data
over the network; they may be connected through wired or wireless links. Examples include
GPS, electrochemical sensors, gyroscopes, RFID, etc. Most sensors need connectivity through
sensor gateways, typically over a Local Area Network (LAN) or Personal Area Network (PAN).
2. Gateways and Data Acquisition: Because sensors and actuators produce large volumes of
data, high-speed gateways and networks are needed to transfer it. The network can be a Local
Area Network (LAN such as WiFi or Ethernet) or a Wide Area Network (WAN such as GSM
or 5G).
3. Edge IT: The edge in IoT architecture consists of hardware and software gateways that
analyze and pre-process data before transferring it to the cloud. If a reading from a sensor or
gateway has not changed from its previous value, it is not transferred to the cloud, which
saves bandwidth.
4. Data Center/Cloud: The data center or cloud forms the management-services layer, which
processes the information through analytics, device management, and security controls.
Besides security controls and device management, the cloud delivers the data to end-user
applications in domains such as retail, healthcare, emergency response, environment, and energy.
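The edge-filtering idea in stage 3 can be sketched as a simple change-detection filter. This is a minimal illustration only; the threshold value and the reading format are assumptions, not part of any specific IoT product:

```python
# Minimal sketch of edge-side filtering: forward a sensor reading
# to the cloud only when it differs enough from the last value sent.
# The 0.5-degree threshold is an illustrative assumption.

def make_edge_filter(threshold=0.5):
    last_sent = None

    def process(reading):
        nonlocal last_sent
        # Forward only if the value changed enough since the last upload.
        if last_sent is None or abs(reading - last_sent) >= threshold:
            last_sent = reading
            return True   # would be sent to the cloud
        return False      # suppressed at the edge, saving bandwidth

    return process

should_send = make_edge_filter(threshold=0.5)
readings = [21.0, 21.1, 21.2, 23.0, 23.1]
sent = [r for r in readings if should_send(r)]
print(sent)  # only readings that changed enough are uploaded
```

Here only two of the five readings would cross the network, which is exactly the bandwidth saving the Edge IT stage describes.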
IoT Devices
Internet-of-Things devices are all around us, constantly transmitting data and "talking" with
other IoT devices. We come across IoT devices almost every day in the form of virtual
assistants, "smart" electronics, and wearable health trackers. Each Internet-of-Things
device tracks information in real time and relays it to us to help make our lives safer,
healthier, or more efficient.
IoT devices extend internet connectivity beyond standard devices such as smartphones,
laptops, tablets, and desktops. Embedding everyday objects with this technology enables them
to communicate and interact over networks, and they can be remotely monitored and
controlled.
IoT devices include computing devices, software, wireless sensors, and actuators. These
devices are connected over the internet, enabling automatic data transfer among objects or
people without human intervention.
Properties of IoT Devices
Some of the essential properties of IoT devices are mentioned below:
o Sense: The devices sense their surrounding environment, e.g., temperature, movement,
and the presence of objects.
o Send and receive data: IoT devices can send and receive data over a network
connection.
o Analyze: The devices can analyze data received from other devices over the network.
o Controlled: IoT devices can be controlled from an endpoint; otherwise, devices
communicating with each other endlessly can lead to system failure.
Advantages of IoT
The Internet of Things offers several advantages in day-to-day life and in the business sector.
Some of its benefits are given below:
o Efficient resource utilization: If we know how each device functions and works, we can
improve resource utilization and better monitor natural resources.
o Minimize human effort: As IoT devices interact and communicate with each other and
perform many tasks for us, they minimize human effort.
o Save time: By reducing human effort, IoT saves time, one of the primary benefits of the
platform.
o Enhance data collection: Connected devices gather richer, more timely data than manual
collection.
o Improve security: With all these things interconnected, the system can be made more
secure and efficient.
Disadvantages of IoT
While the Internet of Things offers a set of benefits, it also creates a significant set of
challenges. Some of them are given below:
o Security: Because IoT systems are interconnected and communicate over networks, they
offer little control despite security measures and can be exposed to various kinds of
network attacks.
o Privacy: Even without the user's active participation, an IoT system can expose
substantial personal data in great detail.
o Complexity: Designing, developing, maintaining, and operating such a large technology
stack for an IoT system is quite complicated.
Advancements and Applications in the IoT
The IoT, along with artificial intelligence, machine learning, and cloud technology, has been
one of the most important trends in high tech over the past couple of years. It has been
developing at astonishing speed since its inception, often rapidly changing direction and
popping up in new and quite unexpected forms.
The rise of 5G technology - 5G networks are at the forefront of development of cellular
mobile communications. Their extremely high speeds will offer an array of new possibilities
for the IoT, paving the way for a degree of connectivity that is impossible with current
standards. Through 5G, data can be gathered, analyzed and managed in real time.
Edge Computing - Edge computing takes the opposite approach to cloud computing: data is
stored and processed in local micro-centers rather than in the cloud, providing numerous
new options for the IoT. By keeping data local, it offers a cheaper, faster, and more
efficient approach to data processing.
Smart Stores - In 2019, smart lighting devices, video feeds, and Wi-Fi-enabled foot-traffic-
monitoring software allowed store owners to collect information about customer traffic
patterns in the shop, how much time customers spend in each aisle, and how they interact
with products on display. After analyzing this data, retailers can change how they lay out
their merchandise, decide how much of it to put on display, or even redesign their entire
store layouts in line with what they have learned about customer behavior.
Smart Cities - Cities use the IoT to track the condition of their traffic lights. If one
malfunctions, the system notifies the utility company so that a technician can quickly be
sent to solve the problem.
Besides traffic lights, the IoT is being put to use in creating smart common areas in cities
around the world. Sidewalk Labs, owned by Alphabet, the parent of Google, is building a
smart neighborhood in Toronto. Smart sensors are being installed around the neighborhood
that will record everything from shared car use, building occupancy, sewage flow, and
optimum temperature choices around the clock. The goal is to make the neighborhood as
safe, convenient, and comfortable as possible for those who live there. Once this model is
perfected, it could set the standard for other smart neighborhoods and even eventually entire
smart cities.
Manufacturing and Healthcare - The IoT has already begun to transform manufacturing.
Sensors, RFID tags, and smart beacons have been in place for several years. Factory
owners can now prevent delays, improve production output, reduce equipment downtime,
and, of course, manage inventory.
In the world of healthcare, already more than half of organizations have adopted IoT
technology. It is an area where there are almost endless possibilities- smart pills, smart home
care, electronic health records, and personal healthcare management.
Connected Smart Cars – Connected cars relay diagnostic information about the vehicle.
Everything from tire pressure, oil level, and fuel consumption to engine faults can now be
sent to the palm of your hand via a Wi-Fi connection to your smartphone.
We can also see further IoT advancements such as connected apps, voice search, and live
traffic information in our cars. Self-driving cars utilize IoT technology in the multitude
of sensors they contain, which allows them to be monitored remotely as they navigate streets.
Increased Security Concerns - In the past, a malware infection meant just lost or
compromised data. The emergence of the IoT means that a virus or ransomware infection can
easily disable vital functions and services, so security vendors are starting to add endpoint
security solutions to their existing services to prevent data loss and provide insight into
threat protection and network health.
Healthcare and IoT, the perfect combination - Sensors and wearable devices can collect
and monitor data and instantaneously send it for processing. Health monitors transmit
analytics by the minute, helping doctors across the world analyze and respond to problems
as they arise.
The general population is becoming more tech savvy which is a necessity for these adoptions.
Mobile applications as well as virtual assistants are becoming more common and widely used
in everyday routines. We even have smart cars coming up which can store and transmit your
medical information on the go. The possibilities are endless.
IoT cloud development - Cloud services have become absolutely essential for IoT
deployment. Because of this, major companies like Amazon are working hard to add more
offerings to their cloud-service portfolios.
A couple of years back, Microsoft launched its “Azure IoT Edge” which now allows various
devices in an environment to run cloud services without being actually connected to a cloud.
With cloud services, security is always a priority, so tech giants are putting increased effort
into improving the security layers and locks surrounding these cloud data storage features.
The prominence of big data and AI - It is a given that with the huge amounts of data
being collected, its processing is a must. Analyzing all this data is important as raw data
in itself cannot be of much use. Thus Big Data management is becoming essential and
growing exponentially with these advancements in IoT. The application of all this data
in real life situations holds great potential.
AI technologies are being employed aggressively by various companies in response to
rising competition. A strong relationship is therefore forming between AI, Big Data, and IoT
devices: AI helps handle and sort through vast amounts of Big Data, driving considerable
growth in its proper use.
Introduction of IoT operating systems - We have all been using Windows and iOS for
some time now, but with the way IoT is developing, these operating systems are
becoming insufficient. Real-time responses are required for IoT to function in a timely
manner, and many conventional operating systems lack that capability.
IoT technologies are also introducing newer and more advanced chipsets which might not be
compatible with current operating systems. Keeping these factors in mind, IoT-oriented
operating systems are being introduced, designed specifically with IoT connectivity in mind.
Implementation of IoT into Marketing - Marketing companies around the world are
making great use of IoT developments and Big Data to target their clients. Advanced
analytics make it possible to target niches in better and more effective ways.
Examples of IoT Platforms
Several IoT platforms are available that make it easy to deploy IoT applications. Some of
them are listed below:
Amazon Web Services (AWS) IoT platform: The AWS IoT platform offers a set of services
that connect to many devices while maintaining security. The platform collects data from
connected devices and performs real-time actions.
Microsoft Azure IoT platform: The Microsoft Azure IoT platform offers strong security
mechanisms, scalability, and easy integration with existing systems. It uses standard protocols
that support bi-directional communication between connected devices and the platform. Azure
also provides Azure Stream Analytics, which processes large amounts of sensor-generated
information in real time. Some common features provided by this platform are:
o Information monitoring
o A rules engine
o Device shadowing
o Identity registry
Google Cloud Platform IoT: Google Cloud Platform is a global cloud platform that
provides a solution for IoT devices and applications. It handles large amounts of data from
connected devices using Cloud IoT Core, and it allows you to apply BigQuery analysis or
machine learning to this data. Some of the features provided by the Google Cloud IoT
platform are:
o Cloud IoT Core
o Speed up IoT devices
o Cloud publisher-subscriber
o Cloud Machine Learning Engine
IBM Watson IoT platform: The IBM Watson IoT platform enables developers to deploy
applications and build IoT solutions quickly. This platform provides the following
services:
o Real-time data exchange
o Device management
o Secure Communication
o Data sensor and weather data services
Artik Cloud IoT platform: The Artik Cloud IoT platform, developed by Samsung, enables
devices to connect to cloud services. Its services continuously connect devices to the cloud
and gather data; the platform stores and combines the incoming data from connected devices
and includes a set of connectors to third-party services.
Bosch IoT Suite: The Bosch IoT Suite is based in Germany and offers safe, reliable storage
of data on its servers there. The platform supports full app development, from prototype to
production.
How an IoT platform helps:
o An IoT platform connects sensors and devices.
o It handles different communication protocols and hardware.
o It provides security and authentication for sensors and users.
o It collects, visualizes, and analyzes the data gathered by sensors and devices.
Blockchain
Blockchain is a constantly growing ledger that keeps a permanent record of all the
transactions that have taken place in a secure, chronological, and immutable way. It can be
used for the secure transfer of money, property, contracts, etc. without requiring a third-party
intermediary such as a bank or government. Blockchain is a software protocol, but it cannot
run without the Internet.
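The "chronological and immutable" chaining can be illustrated with a minimal hash-linked ledger. This is a teaching sketch only, not a real blockchain: there is no network, no consensus, and no digital signatures, and the transaction strings are made up:

```python
import hashlib
import json

def block_hash(body):
    # Hash the block's contents, including the previous block's hash,
    # so altering any earlier block invalidates every later link.
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain, transactions):
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "tx": transactions, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

def is_valid(chain):
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False          # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False          # chain link is broken
    return True

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
print(is_valid(chain))          # True
chain[0]["tx"] = ["tampered"]   # any edit to history...
print(is_valid(chain))          # False - ...breaks every later link
```

Because each block embeds the hash of its predecessor, rewriting one transaction forces an attacker to recompute every subsequent block, which is the core of blockchain's immutability.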
Blockchain Versions
The evolution of blockchain technology from version 1.0 to 3.0 is explained below.
Blockchain 1.0: Currency
The idea of creating money by solving computational puzzles was first introduced
in 2005 by Hal Finney, who created an early concept for cryptocurrencies (an
implementation of distributed ledger technology, DLT). Such a ledger allows financial
transactions based on blockchain or DLT to be executed with Bitcoin, the most
prominent example in this segment. Bitcoin is used as cash for the Internet and is seen as the
enabler of an "Internet of Money".
Blockchain 2.0: Smart Contracts
The main issues with Bitcoin were wasteful mining and a lack of network scalability. To
address these, this version extends the concept beyond currency. The key new concept is the
Smart Contract: a small computer program that "lives" on the blockchain. Smart contracts
execute automatically and check conditions defined in advance, such as facilitation,
verification, or enforcement. The big advantage is that the blockchain makes it practically
impossible to tamper with or hack a smart contract.
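The idea of a program that automatically enforces pre-agreed conditions can be sketched as a hypothetical escrow contract. Note this is an illustration in Python; real smart contracts are written in on-chain languages such as Solidity, and the parties and amounts here are invented:

```python
# Hypothetical escrow "contract": funds are released to the seller
# only when the pre-agreed delivery condition is met - no
# third-party intermediary decides the outcome.

class EscrowContract:
    def __init__(self, buyer, seller, amount):
        self.buyer, self.seller, self.amount = buyer, seller, amount
        self.delivered = False
        self.released = False

    def confirm_delivery(self):
        # On a real blockchain this signal would come from a trusted
        # oracle or a signed transaction, not a plain method call.
        self.delivered = True

    def release(self):
        # The condition check is automatic: funds cannot be released
        # early, and release cannot be blocked once delivery is confirmed.
        if self.delivered and not self.released:
            self.released = True
            return f"{self.amount} paid to {self.seller}"
        return "conditions not met"

deal = EscrowContract("Alice", "Bob", 10)
print(deal.release())          # conditions not met
deal.confirm_delivery()
print(deal.release())          # 10 paid to Bob
```

The point of putting such a program on a blockchain is that its code and state are replicated and verified by every node, so neither party can alter the rules after agreeing to them.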
The blockchain 2.0 generation quickly began processing large numbers of daily transactions
on public networks; millions were raised through ICOs (Initial Coin Offerings), and the
market cap increased rapidly.
Blockchain 3.0: DApps
DApps is also known as a decentralized application. It uses decentralized storage and
communication. Its backend code is running on a decentralized peer-to-peer network. A
DApp can have frontend code hosted on decentralized storages such as Ethereum Swarm
and user interfaces written in any language that can make a call to its backend like a
traditional Apps.
Blockchain Key Areas
Bitcoin is the best-known implementation of the blockchain. Much of the technology's
development builds on what blockchain does to make Bitcoin possible, and from there it has
spread into many different areas.
Blockchain technology fixes three things the Internet was not designed to do:
1. Value
2. Trust
3. Reliability
Value
With blockchain, you can actually create value on a digital asset. The value can be controlled by
that person who owns it. It enables a unique asset to be transferred over the internet without a
middle centralized agent.
Trust
Blockchain makes it possible to securely assign ownership of a specific digital asset and to
track who actually controls that asset at any given time. In other words, blockchain creates a
permanent, secure, unalterable record of who owns what. It uses cryptographic hashing to
preserve the integrity of the information.
Reliability
A blockchain distributes its workload among thousands of different computers worldwide.
This provides reliability: if everything is localized in one place, that place becomes a single
point of failure, but blockchain's decentralized network structure ensures there is no single
point of failure that could bring the entire system down.
Limitation of Blockchain Technology
Blockchain technology has enormous potential for creating trustless, decentralized applications,
but it is not perfect. Certain barriers make it the wrong choice, or unusable, for some
mainstream applications. The main limitations of blockchain technology are discussed below.
Lack of Awareness
There is a lot of discussion about blockchain, but people do not know the true value of
blockchain and how they could implement it in different situations.
Limited availability of technical talent
Today, many developers are available in almost every field, but relatively few have
specialized expertise in blockchain technology. This lack of developers is a hindrance to
building anything on the blockchain.
Immutable
Immutability means we cannot modify any record. This is very helpful when you want to
preserve the integrity of a record and make sure nobody ever tampers with it, but it is also a
drawback: genuine errors can never be corrected once written.
Key Management
As we know, blockchain is built on cryptography, which implies different keys, such as
public keys and private keys. When you hold a private key, you run the risk of losing access
to it. This happened a lot in the early days, when bitcoin was not worth much: people
collected large amounts of bitcoin, then forgot their keys, and those holdings may be worth
millions of dollars today.
Scalability
Blockchains like Bitcoin use consensus mechanisms that require every participating node to
verify each transaction, which limits the number of transactions the network can process.
Bitcoin was not designed for the large transaction volumes that many other institutions
handle; currently, it can process a maximum of about seven transactions per second.
Consensus Mechanism
In Bitcoin's blockchain, a block is created roughly every 10 minutes because every new block
must reach a common consensus across the network. Depending on the network size and the
number of nodes involved, the back-and-forth communication required to attain consensus
can consume a considerable amount of time and resources.
Crypto Currency
What Is Cryptocurrency?
A cryptocurrency is a digital or virtual currency that is secured by cryptography, which makes it
nearly impossible to counterfeit or double-spend. Many cryptocurrencies are decentralized
networks based on blockchain technology—a distributed ledger enforced by a disparate network
of computers. A defining feature of cryptocurrencies is that they are generally not issued by any
central authority, rendering them theoretically immune to government interference or
manipulation.
Cryptocurrency – meaning and definition
Cryptocurrency, sometimes called crypto-currency or crypto, is any form of currency that exists
digitally or virtually and uses cryptography to secure transactions. Cryptocurrencies don't have a
central issuing or regulating authority, instead using a decentralized system to record transactions
and issue new units.
What is cryptocurrency?
Cryptocurrency is a digital payment system that doesn't rely on banks to verify transactions. It’s
a peer-to-peer system that can enable anyone anywhere to send and receive payments. Instead of
being physical money carried around and exchanged in the real world, cryptocurrency payments
exist purely as digital entries to an online database describing specific transactions. When you
transfer cryptocurrency funds, the transactions are recorded in a public ledger. Cryptocurrency is
stored in digital wallets.
Cryptocurrency received its name because it uses encryption to verify transactions. This means
advanced coding is involved in storing and transmitting cryptocurrency data between wallets and
to public ledgers. The aim of encryption is to provide security and safety.
The first cryptocurrency was Bitcoin, which was founded in 2009 and remains the best known
today. Much of the interest in cryptocurrencies is to trade for profit, with speculators at times
driving prices skyward.
How does cryptocurrency work?
Cryptocurrencies run on a distributed public ledger called blockchain, a record of all transactions
updated and held by currency holders.
Units of cryptocurrency are created through a process called mining, which involves using
computer power to solve complicated mathematical problems that generate coins.
Users can also buy the currencies from brokers, then store and spend them using cryptographic
wallets. If you own cryptocurrency, you don’t own anything tangible. What you own is a key
that allows you to move a record or a unit of measure from one person to another without a
trusted third party.
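The "complicated mathematical problems" solved in mining are typically proof-of-work puzzles: find a nonce such that the block's hash meets a difficulty target. The sketch below uses a tiny difficulty for illustration (real Bitcoin difficulty is vastly higher, and real mining hashes a structured block header, not a plain string):

```python
import hashlib

def mine(block_data, difficulty=4):
    # Search for a nonce that makes the SHA-256 hash of
    # (data + nonce) start with `difficulty` zero hex digits.
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

nonce, digest = mine("Alice pays Bob 5", difficulty=4)
print(nonce, digest)
# Finding the nonce takes many hash attempts, but anyone can verify
# the solution with a single hash - that asymmetry is what makes
# proof-of-work useful for securing the ledger.
```

Raising the difficulty by one hex digit multiplies the expected search work by 16, which is how networks tune how hard coins are to "generate".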
Blockchain
Central to the appeal and functionality of Bitcoin and other cryptocurrencies is blockchain
technology. As its name indicates, blockchain is essentially a set of connected blocks or an
online ledger. Each block contains a set of transactions that have been independently verified
by each member of the network. Every new block generated must be verified by each node
before being confirmed, making it almost impossible to forge transaction histories.
Types of Cryptocurrency
Bitcoin is the most popular and valuable cryptocurrency. An anonymous person called Satoshi
Nakamoto invented it and introduced it to the world via a white paper in 2008. There are
thousands of cryptocurrencies present in the market today.
Non-Bitcoin cryptocurrencies are collectively known as “altcoins” to distinguish them from the
original.
Each cryptocurrency claims to have a different function and specification. For
example, Ethereum's ether markets itself as gas for the underlying smart
contract platform. Ripple's XRP is used by banks to facilitate transfers between different
geographies.
In the wake of Bitcoin's success, many other cryptocurrencies, known as "altcoins," have been
launched. Some of these are clones or forks of Bitcoin, while others are new currencies that
were built from scratch. They include Solana, Litecoin, Ethereum, Cardano, and EOS. By
November 2021, the aggregate value of all the cryptocurrencies in existence had reached over
$2.1 trillion—Bitcoin represented approximately 41% of that total value.
Examples of Cryptocurrencies
There are thousands of cryptocurrencies. Some of the best known include:
Bitcoin:
Founded in 2009, Bitcoin was the first cryptocurrency and is still the most commonly traded.
The currency was developed by Satoshi Nakamoto – widely believed to be a pseudonym for an
individual or group of people whose precise identity remains unknown.
Ethereum:
Developed in 2015, Ethereum is a blockchain platform with its own cryptocurrency, called Ether
(ETH) or Ethereum. It is the most popular cryptocurrency after Bitcoin.
Litecoin:
This currency is most similar to bitcoin but has moved more quickly to develop new innovations,
including faster payments and processes to allow more transactions.
Ripple:
Ripple is a distributed ledger system that was founded in 2012. Ripple can be used to track
different kinds of transactions, not just cryptocurrency. The company behind it has worked with
various banks and financial institutions.
How to buy cryptocurrency
There are typically three steps involved in buying cryptocurrency safely. These are:
Step 1: Choosing a platform
The first step is deciding which platform to use. Generally, you can choose between a traditional
broker or dedicated cryptocurrency exchange:
Traditional brokers. These are online brokers who offer ways to buy and sell
cryptocurrency, as well as other financial assets like stocks, bonds, and ETFs. These
platforms tend to offer lower trading costs but fewer crypto features.
Cryptocurrency exchanges. There are many cryptocurrency exchanges to choose from,
each offering different cryptocurrencies, wallet storage, interest-bearing account options,
and more. Many exchanges charge asset-based fees.
When comparing different platforms, consider which cryptocurrencies are on offer, what fees
they charge, their security features, storage and withdrawal options, and any educational
resources.
Step 2: Funding your account
Once you have chosen your platform, the next step is to fund your account so you can begin
trading. Most crypto exchanges allow users to purchase crypto using fiat (i.e., government-
issued) currencies such as the US Dollar, the British Pound, or the Euro using their debit or
credit cards – although this varies by platform.
Crypto purchases with credit cards are considered risky, and some exchanges don't support them.
Some credit card companies don't allow crypto transactions either. This is because
cryptocurrencies are highly volatile, and it is not advisable to risk going into debt — or
potentially paying high credit card transaction fees — for certain assets.
Some platforms will also accept ACH transfers and wire transfers. The accepted payment
methods and time taken for deposits or withdrawals differ per platform. Equally, the time taken
for deposits to clear varies by payment method.
An important factor to consider is fees. These include potential deposit and withdrawal
transaction fees plus trading fees. Fees will vary by payment method and platform, which is
something to research at the outset.
Step 3: Placing an order
You can place an order via your broker's or exchange's web or mobile platform. If you are
planning to buy cryptocurrencies, you can do so by selecting "buy," choosing the order type,
entering the amount of cryptocurrencies you want to purchase, and confirming the order. The
same process applies to "sell" orders.
There are also other ways to invest in crypto. These include payment services like PayPal,
Cash App, and Venmo, which allow users to buy, sell, or hold cryptocurrencies. In addition,
there are the following investment vehicles:
Bitcoin trusts: You can buy shares of Bitcoin trusts with a regular brokerage account.
These vehicles give retail investors exposure to crypto through the stock market.
Bitcoin mutual funds: There are Bitcoin ETFs and Bitcoin mutual funds to choose
from.
Blockchain stocks or ETFs: You can also indirectly invest in crypto through blockchain
companies that specialize in the technology behind crypto and crypto transactions.
Alternatively, you can buy stocks or ETFs of companies that use blockchain technology.
The best option for you will depend on your investment goals and risk appetite.
How to store cryptocurrency
Once you have purchased cryptocurrency, you need to store it safely to protect it from hacks or
theft. Usually, cryptocurrency is stored in crypto wallets, which are physical devices or online
software used to store the private keys to your cryptocurrencies securely. Some exchanges
provide wallet services, making it easy for you to store directly through the platform. However,
not all exchanges or brokers automatically provide wallet services for you.
There are different wallet providers to choose from. The terms “hot wallet” and “cold wallet” are
used:
Hot wallet storage: "hot wallets" refer to crypto storage that uses online software to
protect the private keys to your assets.
Cold wallet storage: Unlike hot wallets, cold wallets (also known as hardware wallets)
rely on offline electronic devices to securely store your private keys.
Typically, cold wallets tend to charge fees, while hot wallets don't.
What can you buy with cryptocurrency?
When it was first launched, Bitcoin was intended to be a medium for daily transactions, making
it possible to buy everything from a cup of coffee to a computer or even big-ticket items like real
estate. That hasn’t quite materialized and, while the number of institutions accepting
cryptocurrencies is growing, large transactions involving it are rare. Even so, it is possible to buy
a wide variety of products from e-commerce websites using crypto. Examples:
Technology and e-commerce sites:
Several companies that sell tech products accept crypto on their websites, such as newegg.com,
AT&T, and Microsoft. Overstock, an e-commerce platform, was among the first sites to accept
Bitcoin. Shopify, Rakuten, and Home Depot also accept it.
Luxury goods:
Some luxury retailers accept crypto as a form of payment. For example, online luxury retailer
Bitdials offers Rolex, Patek Philippe, and other high-end watches in return for Bitcoin.
Cars:
Some car dealers – from mass-market brands to high-end luxury dealers – already accept
cryptocurrency as payment.
Insurance:
In April 2021, Swiss insurer AXA announced that it had begun accepting Bitcoin as a mode of
payment for all its lines of insurance except life insurance (due to regulatory issues). Premier
Shield Insurance, which sells home and auto insurance policies in the US, also accepts Bitcoin
for premium payments.
If you want to spend cryptocurrency at a retailer that doesn’t accept it directly, you can use a
cryptocurrency debit card, such as BitPay in the US.
Cryptocurrency fraud and cryptocurrency scams
Unfortunately, cryptocurrency crime is on the rise. Cryptocurrency scams include:
Fake websites: Bogus sites which feature fake testimonials and crypto jargon promising
massive, guaranteed returns, provided you keep investing.
Virtual Ponzi schemes: Cryptocurrency criminals promote non-existent opportunities to invest
in digital currencies and create the illusion of huge returns by paying off old investors with new
investors’ money. One scam operation, BitClub Network, raised more than $700 million before
its perpetrators were indicted in December 2019.
"Celebrity" endorsements: Scammers pose online as billionaires or well-known names who
promise to multiply your investment in a virtual currency but instead steal what you send. They
may also use messaging apps or chat rooms to start rumours that a famous businessperson is
backing a specific cryptocurrency. Once they have encouraged investors to buy and driven up the
price, the scammers sell their stake, and the currency reduces in value.
Romance scams: The FBI warns of a trend in online dating scams, where tricksters persuade
people they meet on dating apps or social media to invest or trade in virtual currencies. The
FBI’s Internet Crime Complaint Center (IC3) fielded more than 1,800 reports of crypto-focused
romance scams in the first seven months of 2021, with losses reaching $133 million.
In other cases, fraudsters may pose as legitimate virtual currency traders or set up bogus
exchanges to trick people into giving them money. Another crypto scam involves fraudulent sales pitches
for individual retirement accounts in cryptocurrencies. Then there is straightforward
cryptocurrency hacking, where criminals break into the digital wallets where people store their
virtual currency to steal it.
Four tips to invest in cryptocurrency safely
According to Consumer Reports, all investments carry risk, but some experts consider
cryptocurrency to be one of the riskier investment choices out there. If you are planning to invest
in cryptocurrencies, these tips can help you make educated choices.
Research exchanges:
Before you invest, learn about cryptocurrency exchanges. It’s estimated that there are over 500
exchanges to choose from. Do your research, read reviews, and talk with more experienced
investors before moving forward.
Know how to store your digital currency:
If you buy cryptocurrency, you have to store it. You can keep it on an exchange or in a digital
wallet. While there are different kinds of wallets, each has its benefits, technical requirements,
and security. As with exchanges, you should investigate your storage choices before investing.
Diversify your investments:
Diversification is key to any good investment strategy, and this holds true when you are
investing in cryptocurrency. Don't put all your money in Bitcoin, for example, just because that's
the name you know. There are thousands of options, and it's better to spread your investment
across several currencies.
Prepare for volatility:
The cryptocurrency market is highly volatile, so be prepared for ups and downs. You will see
dramatic swings in prices. If your investment portfolio or mental wellbeing can't handle that,
cryptocurrency might not be a wise choice for you.
Cryptocurrency is all the rage right now, but remember, it is still in its relative infancy and is
considered highly speculative. Investing in something new comes with challenges, so be
prepared. If you plan to participate, do your research, and invest conservatively to start.
One of the best ways to stay safe online is to use comprehensive security software that defends
against malware infections, spyware and data theft, and protects your online payments with
strong encryption.
Advantages and Disadvantages of Cryptocurrency
Cryptocurrencies were introduced with the intent to revolutionize financial infrastructure. As
with every revolution, however, there are tradeoffs involved. At the current stage of
development for cryptocurrencies, there are many differences between the theoretical ideal of a
decentralized system with cryptocurrencies and its practical implementation.
Advantages
Cryptocurrencies represent a new, decentralized paradigm for money. In this system,
centralized intermediaries, such as banks and monetary institutions, are not necessary to
enforce trust and police transactions between two parties. Thus, a system with
cryptocurrencies eliminates the possibility of a single point of failure.
Cryptocurrencies promise to make it easier to transfer funds directly between two
parties, without the need for a trusted third party like a bank or a credit card company.
Such decentralized transfers are secured by the use of public keys and private keys and
different forms of incentive systems, such as proof of work or proof of stake.
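The proof-of-work incentive mentioned above can be sketched in a few lines of Python. This is a toy illustration, not Bitcoin's actual implementation: the data string, difficulty value and function name are hypothetical, but the core idea is real: search for a nonce whose hash meets a target, which is expensive to find but trivial to verify.

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Search for a nonce whose SHA-256 digest of (data + nonce) starts
    with `difficulty` hex zeros: a toy version of proof of work."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Finding the nonce takes many hash attempts on average;
# verifying the found nonce takes a single hash.
nonce = mine("block with transactions", difficulty=4)
```

Raising the difficulty by one hex digit multiplies the expected work by 16, which is how real networks tune how hard blocks are to produce.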
Because they do not use third-party intermediaries, cryptocurrency transfers between
two transacting parties are faster as compared to standard money transfers.
Cryptocurrency investments can generate profits. Cryptocurrency markets have
skyrocketed in value over the past decade, at one point reaching almost $2 trillion.
Bitcoin serves as an intermediate currency to streamline money transfers across borders.
Thus, a fiat currency is converted to Bitcoin, transferred across borders and,
subsequently, converted to the destination fiat currency. This method streamlines the
money transfer process and makes it cheaper.
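The cross-border transfer described above is just two conversions with a fee on each hop; a minimal sketch, where the function name, exchange rates and fee are all hypothetical illustration values:

```python
def remit_via_btc(amount_usd: float, usd_per_btc: float,
                  inr_per_btc: float, fee_pct: float) -> float:
    """Convert USD -> BTC -> INR, charging a percentage fee per hop.
    All rates and fees here are hypothetical illustration values."""
    btc = amount_usd / usd_per_btc * (1 - fee_pct)   # buy BTC, pay fee
    return btc * inr_per_btc * (1 - fee_pct)         # sell BTC, pay fee

# Hypothetical: $60,000/BTC, Rs 5,000,000/BTC, 0.5% fee on each hop
received = remit_via_btc(1000, 60_000, 5_000_000, 0.005)
print(round(received, 2))  # 82502.08 received from $1,000 sent
```

Whether this beats a bank wire depends entirely on the real fees and the exchange-rate spread at transfer time.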
Disadvantages
Although often assumed to be anonymous, cryptocurrencies are actually pseudonymous. They leave a digital trail that agencies
such as the Federal Bureau of Investigation (FBI) can decipher. This opens up
possibilities of governments or federal authorities tracking the financial transactions of
ordinary citizens.
Cryptocurrencies have become a popular tool with criminals for nefarious activities such
as money laundering and illicit purchases. Cryptocurrencies have also become a favorite
of hackers who use them for ransomware activities.
In theory, cryptocurrencies are meant to be decentralized, their wealth distributed
between many parties on a blockchain. In reality, ownership is highly concentrated.
One of the conceits of cryptocurrencies is that anyone can mine them using a computer
with an Internet connection. However, mining popular cryptocurrencies requires
considerable energy, sometimes as much energy as entire countries consume.
Though cryptocurrency blockchains are highly secure, other crypto repositories, such as
exchanges and wallets, can be hacked.
Cryptocurrencies traded in public markets suffer from price volatility. Bitcoin has
experienced rapid surges and crashes in its value.
Quantum Computing
Quantum computing is the use of quantum mechanics to run calculations on specialised hardware.
Quantum computing is an area of computing focused on developing computer technology based
on the principles of quantum theory (which explains the behavior of energy and material on the
atomic and subatomic levels).
Computers used today can only encode information in bits that take the value of 1 or 0—
restricting their ability. Quantum computing, on the other hand, uses quantum bits, or qubits. It
harnesses the unique ability of subatomic particles to exist in more than one state at once
(i.e., a 1 and a 0 at the same time).
To fully define quantum computing, we need to define some key terms first.
What is quantum?
The quantum in "quantum computing" refers to the quantum mechanics that the system uses to
calculate outputs. In physics, a quantum is the smallest possible discrete unit of any physical
property. It usually refers to properties of atomic or subatomic particles, such as electrons,
neutrinos and photons.
What is a qubit?
A qubit is the basic unit of information in quantum computing. Qubits play a similar role in
quantum computing as bits play in classical computing, but they behave very differently.
Classical bits are binary and can hold only a position of 0 or 1, but qubits can hold a
superposition of all possible states.
What Is Quantum Computing?
Quantum computers harness the unique behaviour of quantum physics—such as
Superposition,
Entanglement and
Quantum interference
and apply it to computing.
Superposition
In superposition, quantum particles are a combination of all possible states. They fluctuate until
they are observed and measured. One way to picture the difference between binary position and
superposition is to imagine a coin. Classical bits are measured by "flipping the coin" and getting
heads or tails. However, if you were able to look at a coin and see both heads and tails at the
same time, as well as every state in between, the coin would be in superposition.
Entanglement
Entanglement is the ability of quantum particles to correlate their measurement results with each
other. When qubits are entangled, they form a single system and influence each other. We can
use the measurements from one qubit to draw conclusions about the others. By adding and
entangling more qubits in a system, quantum computers can calculate exponentially more
information and solve more complicated problems.
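The correlated measurements described above can be mimicked classically for illustration. The sketch below samples the Bell state (|00> + |11>)/sqrt(2): only the outcomes 00 and 11 ever occur, so the two qubits' results always agree. (A classical simulation like this reproduces the correlations but not the genuinely quantum behaviour behind them.)

```python
import random

def measure_bell_pair() -> tuple:
    """Sample the Bell state (|00> + |11>) / sqrt(2): outcomes 00 and 11
    each occur with probability 1/2, so the two qubits always agree."""
    outcome = random.choice([0, 1])
    return (outcome, outcome)

pairs = [measure_bell_pair() for _ in range(1_000)]
print(all(a == b for a, b in pairs))  # True: perfectly correlated results
```

Measuring one qubit of the pair immediately tells you the other qubit's result, which is the property entangled systems exploit.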
Quantum interference
Quantum interference is the intrinsic behaviour of a qubit, due to superposition, to influence the
probability of it collapsing one way or another. Quantum computers are designed and built to
reduce interference as much as possible and ensure the most accurate results. To this end,
Microsoft uses topological qubits, which are stabilised by manipulating their structure and
surrounding them with chemical compounds that protect them from outside interference.
The Uses
Quantum applications are transforming how we live, work and play. Technologies like
quantum sensors, quantum computers and quantum information security are emerging
from labs around the world, and we are already seeing the tremendous possibilities.
Quantum computing could contribute greatly in the fields of finance, military affairs and
intelligence, drug design and discovery, aerospace designing, utilities (nuclear fusion), polymer
design, machine learning and artificial intelligence (AI) and Big Data search, and digital
manufacturing.
How does quantum computing work?
A quantum computer has three primary parts:
An area that houses the qubits
A method for transferring signals to the qubits
A classical computer to run a program and send instructions
For some methods of qubit storage, the unit that houses the qubits is kept at a temperature just
above absolute zero to maximise their coherence and reduce interference. Other types of qubit
housing use a vacuum chamber to help minimise vibrations and stabilise the qubits.
Signals can be sent to the qubits using a variety of methods, including microwaves, laser and
voltage.
Quantum computer uses and application areas
A quantum computer cannot do everything faster than a classical computer, but there are a few
areas where quantum computers have the potential to make a big impact.
Quantum simulation
Quantum computers work exceptionally well for modelling other quantum systems because they
use quantum phenomena in their computation. This means that they can handle the complexity
and ambiguity of systems that would overload classical computers. Examples of quantum
systems that we can model include photosynthesis, superconductivity and complex molecular
formations.
Cryptography
Classical cryptography—such as the Rivest–Shamir–Adleman (RSA) algorithm that is widely
used to secure data transmission—relies on the intractability of problems such as integer
factorisation or discrete logarithms. Many of these problems can be solved more efficiently using
quantum computers.
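To see why factorisation matters for RSA, here is textbook RSA with tiny primes (the classic p = 61, q = 53 example). This is purely illustrative and insecure: anyone who factors n = 3233 back into p and q can recompute the private key, which is exactly what Shor's quantum algorithm would do efficiently at real key sizes.

```python
def toy_rsa(message: int):
    """Textbook RSA with tiny primes: illustration only, never secure.
    The private exponent d can be recovered by anyone who factors n."""
    p, q = 61, 53
    n = p * q                    # public modulus: 3233
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent, coprime with phi
    d = pow(e, -1, phi)          # private exponent: 2753 (Python 3.8+)
    cipher = pow(message, e, n)  # encrypt: m^e mod n
    plain = pow(cipher, d, n)    # decrypt: c^d mod n
    return cipher, plain

cipher, plain = toy_rsa(65)
print(cipher, plain)  # 2790 65: decryption recovers the message
```

Real RSA moduli are 2048 bits or more, far beyond classical factoring, but within reach of a sufficiently large fault-tolerant quantum computer.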
Optimisation
Optimisation is the process of finding the best solution to a problem given its desired outcome
and constraints. In science and industry, critical decisions are made based on factors such as cost,
quality and production time—all of which can be optimised. By running quantum-inspired
optimisation algorithms on classical computers, we can find solutions that were previously
impossible. This helps us find better ways to manage complex systems such as traffic flows,
airplane gate assignments, package deliveries and energy storage.
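"Quantum-inspired" optimisation generally means classical heuristics that explore a solution space probabilistically. A minimal sketch of one such heuristic, simulated annealing, on a toy one-dimensional problem; the function names and all parameter values are arbitrary illustrations:

```python
import math
import random

def simulated_annealing(cost, start, neighbours, steps=5_000, t0=10.0):
    """Minimise `cost`: always accept downhill moves, and accept uphill
    moves with probability exp(-delta / t) as the temperature t cools."""
    x = best = start
    for step in range(1, steps + 1):
        t = t0 / step                       # cooling schedule
        y = random.choice(neighbours(x))    # propose a random neighbour
        delta = cost(y) - cost(x)
        if delta <= 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
    return best

# Toy problem: find the integer in [-50, 50] minimising (x - 7)^2
cost = lambda x: (x - 7) ** 2
neighbours = lambda x: [max(-50, x - 1), min(50, x + 1)]
print(simulated_annealing(cost, start=-40, neighbours=neighbours))  # 7
```

The early high-temperature phase lets the search escape local minima; the late cold phase locks it into the best region found.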
Quantum machine learning
Machine learning on classical computers is revolutionising the world of science and business.
However, training machine learning models comes with a high computational cost and that has
hindered the scope and development of the field. To speed up progress in this area, we are
exploring ways to devise and implement quantum software that enables faster machine learning.
Search
A quantum algorithm developed in 1996, Grover's algorithm, dramatically sped up the solution
to unstructured data searches, running the search in fewer steps than any classical algorithm
could.
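The speed-up is quadratic: classical unstructured search needs on the order of N oracle queries, while Grover's algorithm needs roughly (pi/4)*sqrt(N). A quick query-count comparison (the function names are illustrative):

```python
import math

def classical_queries(n: int) -> int:
    """Worst-case oracle queries for classical unstructured search."""
    return n

def grover_queries(n: int) -> int:
    """Approximate oracle queries for Grover's search: (pi / 4) * sqrt(n)."""
    return math.ceil(math.pi / 4 * math.sqrt(n))

for n in (1_000, 1_000_000):
    print(n, classical_queries(n), grover_queries(n))
# For 1,000,000 items: ~786 quantum queries vs up to 1,000,000 classical
```

A quadratic speed-up is far weaker than the exponential advantage expected for factoring, but it applies to a very broad class of search problems.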
Real-World Example of a Quantum Computer
Google (GOOG) is spending billions of dollars on its plan to build its quantum computer by
2029. The company has opened a quantum AI campus in California to help it meet this goal.
Google has been investing in this technology for years, as have other companies such as
Honeywell International (HON) and International Business Machines (IBM). IBM expects to hit
major quantum computing milestones in the coming years.