Chapter 2 (EMTE1012)
Data Science
Outline of Discussion
What is Data Science?
What are data and information?
Data processing cycle
Data types and their representation
Data Value Chain
Basic concepts of Big Data
Clustered Computing
Hadoop Ecosystem
Big Data Life Cycle with Hadoop
Data Science
• Data science is a multi-disciplinary field that uses scientific methods,
processes, algorithms, and systems to extract knowledge and insights
from structured, semi-structured and unstructured data.
• Data science is much more than simply analyzing data.
• It offers a range of roles and requires a range of skills.
Big Data includes huge volume, high velocity, and an extensive variety of data. The data comes in three types:
Structured data
Semi-structured data
Unstructured data
What are data and information?
Data: facts, concepts, or instructions represented in a formalized manner suitable for communication, interpretation, or processing by humans or electronic machines.
Unprocessed facts and figures.
Represented with characters such as alphabets (A-Z, a-z), digits (0-9), or special characters (+, -, /, *, <, >, =, etc.).
Information: processed data on which decisions and actions are based.
Data that has been processed into a form that is meaningful to the recipient and is of real or perceived value in the current or prospective action or decision of the recipient.
Interpreted data; created from organized, structured, and processed data in a
particular context.
Data Processing Cycle
The restructuring or reordering of data by people or machines to increase its usefulness and add value for a particular purpose.
Basic steps: input, processing, and output.
Cont. …
Input:
Input data is prepared in some convenient form for processing.
The form depends on the processing machine.
For example, when electronic computers are used, the input data can be recorded on any of several types of storage media, such as a hard disk, CD, or flash disk.
Processing:
The input data is changed to produce data in a more useful form.
For example, interest can be calculated on a bank deposit, or a summary of sales for the month can be calculated from the sales orders.
Output:
The result of the preceding processing step is collected.
The particular form of the output data depends on the use of the data. For example, output data may be the payroll for employees.
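A minimal Python sketch of the input-processing-output cycle, using the bank-deposit interest example; the deposit amounts and the interest rate are illustrative values:

# Input: deposit records, here hard-coded but typically read from a file or disk.
deposits = [1000.0, 2500.0, 400.0]   # illustrative deposit amounts
annual_rate = 0.07                   # illustrative 7% annual interest rate

# Processing: transform the raw input into a more useful form (interest earned).
interest = [amount * annual_rate for amount in deposits]

# Output: collect and present the result of the processing step.
for amount, earned in zip(deposits, interest):
    print(f"Deposit {amount:.2f} earns {earned:.2f} interest per year")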
Data types and their representation
• Data types can be described from diverse perspectives.
• In computer science and computer programming, a data type is simply an attribute of
data that tells the compiler or interpreter how the programmer intends to use the data.
Data types from Computer programming perspective
Common data types include:
Integers (int): used to store whole numbers, mathematically known as integers
Booleans (bool): used to represent values restricted to one of two states: true or false
Characters (char): used to store a single character
Floating-point numbers (float): used to store real numbers
Alphanumeric strings (string): used to store a combination of characters and numbers
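For illustration, the same data types expressed in Python (type names differ between languages; Python has no separate char type, so a one-character string stands in for it):

count: int = 42            # integer: whole numbers
is_valid: bool = True      # boolean: one of two values, true or false
grade: str = "A"           # character: a single character (a 1-character string in Python)
price: float = 19.99       # floating-point number: real numbers
label: str = "Item42"      # string: a combination of characters and digits
print(type(count), type(is_valid), type(grade), type(price), type(label))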
Cont. …
Data types from Data Analytics perspective
• From a data analytics point of view, it is important to understand that there are three common data types or structures: structured, semi-structured, and unstructured data.
Cont. …
Structured Data:
data that adheres to a pre-defined data model and is therefore straightforward to
analyze.
Structured data conforms to a tabular format with a relationship between the
different rows and columns.
Common examples of structured data are Excel files or SQL databases.
Each of these has structured rows and columns that can be sorted.
Semi-structured Data:
A form of structured data that does not conform to the formal structure of data models associated with relational databases or other forms of data tables,
but nonetheless, contains tags or other markers to separate semantic elements and
enforce hierarchies of records and fields within the data.
also known as a self-describing structure.
JSON (JavaScript Object Notation) and XML (Extensible Markup Language) are common examples of semi-structured data.
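A small Python sketch contrasting the two forms: a JSON document is semi-structured (self-describing keys, optional fields), and loading it into a table gives the row/column layout of structured data. The pandas dependency and the field names are illustrative assumptions:

import json
import pandas as pd

# Semi-structured: keys (tags) describe each value; records need not share all fields.
raw = '[{"name": "Abebe", "age": 30}, {"name": "Sara", "age": 25, "city": "Adama"}]'
records = json.loads(raw)

# Structured: the same records forced into a tabular row/column model.
table = pd.DataFrame(records)   # fields missing from a record become NaN
print(table)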
Cont. …
Unstructured Data:
Information that either does not have a predefined data model or is not organized
in a pre-defined manner.
Typically text-heavy but may contain data such as dates, numbers, and facts as
well.
This results in irregularities and ambiguities that make it difficult to understand
using traditional programs as compared to data stored in structured databases.
Common examples of unstructured data include audio and video files, or data held in NoSQL databases.
Cont. …
Metadata:
The last category of data type is metadata.
Metadata is data about data.
It provides additional information about a specific set of data.
In a set of photographs, for example, metadata could describe when and where
the photos were taken.
The metadata then provides fields for dates and locations which, by themselves,
can be considered structured data.
Metadata is frequently used by Big Data solutions for initial analysis.
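A minimal sketch of the photo example in Python; the field names and values are illustrative assumptions:

# Metadata: data about data. The image bytes are the data; this record describes them.
photo_metadata = {
    "file": "holiday_001.jpg",
    "taken_on": "2023-01-15",
    "location": {"lat": 9.03, "lon": 38.74},
    "camera": "Phone-XYZ",
}

# The date and location fields themselves form structured data that can be queried.
print(photo_metadata["taken_on"], photo_metadata["location"])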
Data value Chain
Introduced to describe the information flow within a big data system as a series of
steps needed to generate value and useful insights from data.
The Big Data Value Chain identifies the following key high-level activities
Data value Chain …
Data Acquisition
It is the process of gathering, filtering, and cleaning data before it is put in a data
warehouse or any other storage solution on which data analysis can be carried out.
It is one of the major big data challenges in terms of infrastructure requirements.
The infrastructure required to support the acquisition of big data must deliver low, predictable latency in both capturing data and executing queries; be able to handle very high transaction volumes, often in a distributed environment; and support flexible and dynamic data structures.
Data value Chain …
Data Analysis
It is concerned with making the raw data acquired amenable to use in decision-
making as well as domain-specific usage.
Involves exploring, transforming, and modeling data with the goal of highlighting
relevant data, synthesizing and extracting useful hidden information with high
potential from a business point of view.
Related areas include data mining, business intelligence, and machine learning.
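A short pandas sketch of the explore/transform/summarize steps on a made-up sales dataset; the column names and values are assumptions for illustration:

import pandas as pd

# Explore: load and inspect a small, made-up sales dataset.
sales = pd.DataFrame({
    "region": ["North", "South", "North", "South"],
    "units":  [120, 80, 150, 60],
    "price":  [10.0, 12.5, 10.0, 12.5],
})

# Transform: derive revenue and summarize it per region.
sales["revenue"] = sales["units"] * sales["price"]
summary = sales.groupby("region")["revenue"].sum()

# Highlight the relevant result for decision-making.
print(summary.sort_values(ascending=False))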
Data value Chain …
Data Curation
Active management of data over its life cycle to ensure it meets the necessary data
quality requirements for its effective usage.
Data curation processes can be categorized into different activities such as content
creation, selection, classification, transformation, validation, and preservation.
It is performed by expert curators who are responsible for improving the accessibility and quality of data.
These curators, also known as scientific curators or data annotators, hold the responsibility of ensuring that data are trustworthy, discoverable, accessible, reusable, and fit for their purpose.
A key trend for the curation of big data is the use of community and crowdsourcing approaches.
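As a small illustration of the validation activity, a hypothetical curation check in Python that flags records failing simple quality rules (required fields present, values in range):

# Hypothetical quality rules: every record needs a non-empty name and an age in 0-120.
records = [
    {"id": 1, "name": "Abebe", "age": 30},
    {"id": 2, "name": "", "age": 30},
    {"id": 3, "name": "Sara", "age": 150},
]

def is_valid(record):
    return bool(record.get("name")) and 0 <= record.get("age", -1) <= 120

curated = [r for r in records if is_valid(r)]
rejected = [r for r in records if not is_valid(r)]
print("kept:", curated)
print("needs review:", rejected)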
Data value Chain …
Data Storage
It is the persistence and management of data in a scalable way that satisfies the needs
of applications that require fast access to the data.
Relational Database Management Systems (RDBMS) have been the main, and almost only, solution to the storage paradigm for nearly 40 years.
However, the ACID (Atomicity, Consistency, Isolation, and Durability) properties that guarantee database transactions come at the cost of flexibility with regard to schema changes, and of performance and fault tolerance when data volumes and complexity grow, making RDBMS unsuitable for many big data scenarios.
NoSQL technologies have been designed with the scalability goal in mind and present a wide range of solutions based on alternative data models.
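A minimal sketch of the flexibility point: a relational table (SQLite here stands in for an RDBMS) needs its schema declared up front, while a document-style record, as used by many NoSQL stores, can carry whatever fields each item needs. SQLite and the field names are illustrative choices:

import sqlite3

# Relational storage: schema fixed in advance; a new field means altering the table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (id, name) VALUES (?, ?)", (1, "Abebe"))

# Document-style record (as in many NoSQL stores): each record is self-describing,
# so new fields can appear without any schema change.
doc = {"id": 2, "name": "Sara", "interests": ["big data", "hadoop"]}

print(db.execute("SELECT * FROM users").fetchall())
print(doc)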
Data value Chain …
Data Usage
It covers the data-driven business activities that need access to data, its analysis, and
the tools needed to integrate the data analysis within the business activity.
Data usage in business decision-making can enhance competitiveness through:
The reduction of costs
Increased added value
Any other parameter that can be measured against existing performance criteria.
Basic concepts of big data
A blanket term for the non-traditional strategies and technologies needed to gather, organize, process, and extract insights from large datasets.
While the problem of working with data that exceeds the computing power or storage
of a single computer is not new, the pervasiveness, scale, and value of this type of
computing have greatly expanded in recent years.
What Is Big Data?
A collection of data sets so large and complex that it becomes difficult to process them using on-hand database management tools or traditional data processing applications.
In this context, a “large dataset” means a dataset too large to reasonably process or
store with traditional tooling or on a single computer.
Big Data is a large amount of data, consisting of structured and unstructured data, that cannot be stored or processed by traditional data storage and processing techniques.
Cont. …
Big data is characterized by the 3 Vs and more:
Volume: large amounts of data (zettabytes / massive datasets)
Velocity: Data is live streaming or in motion
Variety: data comes in many different forms from diverse sources
Veracity: can we trust the data? How accurate is it? etc.
Clustered Computing and Hadoop Ecosystem
Clustered Computing
• Because of the qualities of big data, individual computers are often inadequate for
handling the data at most stages.
• To better address the high storage and computational needs of big data, computer
clusters are a better fit.
• Big data clustering software combines the resources of many smaller machines,
seeking to provide a number of benefits:
Resource Pooling: Combining the available storage space to hold data is a clear
benefit, but CPU and memory pooling are also extremely important.
• Processing large datasets requires large amounts of all three of these resources.
Cont. …
High Availability: Clusters can provide varying levels of fault tolerance and
availability guarantees to prevent hardware or software failures from affecting
access to data and processing.
• This becomes increasingly important as we continue to emphasize the importance of
real-time analytics.
Easy Scalability: Clusters make it easy to scale horizontally by adding additional
machines to the group.
• The system can react to changes in resource requirements without expanding the physical resources on any single machine.
Using clusters requires a solution for managing cluster membership, coordinating
resource sharing, and scheduling actual work on individual nodes.
Cluster membership and resource allocation can be handled by software like
Hadoop’s YARN (which stands for Yet Another Resource Negotiator).
Hadoop and its Ecosystem
Hadoop is an open-source framework intended to make interaction with big data easier.
Hadoop is a tool that is used to handle big data.
It is a framework that allows for the distributed processing of large datasets across
clusters of computers using simple programming models.
It was inspired by technical papers published by Google (on the Google File System and MapReduce).
The four key characteristics of Hadoop are:
Economical: highly economical as ordinary computers can be used for data processing.
Reliable: reliable as it stores copies of the data on different machines and is resistant to hardware
failure.
Scalable: easily scalable, both horizontally and vertically; a few extra nodes help in scaling up the framework.
Flexible: you can store as much structured and unstructured data as you need and decide how to use it later.
Cont. …
Hadoop has an ecosystem that has evolved from its four core components:
Data management
Data access
Data processing
Data storage
It is continuously growing to meet the needs of Big Data.
Cont. …
It comprises components such as HDFS, YARN, MapReduce, Spark, Hive, Pig, HBase, Sqoop, and Flume, among many others.
Big Data Life Cycle with Hadoop
Ingesting data into the system
• The first stage of Big Data processing is Ingest.
• The data is ingested or transferred to Hadoop from various sources such as relational
databases, systems, or local files.
• Sqoop transfers data from RDBMS to HDFS, whereas Flume transfers event data.
Processing the data in storage
• The second stage is Processing.
• In this stage, the data is stored and processed.
• The data is stored in the distributed file system HDFS and in the NoSQL distributed database HBase.
• Spark and MapReduce perform data processing.
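A minimal PySpark sketch of this processing stage: counting words in text read from the distributed file system. The input path is a placeholder and the availability of pyspark is assumed; a MapReduce job expresses the same map/shuffle/reduce pattern.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("wordcount-sketch").getOrCreate()

# Read text (e.g. from HDFS) as an RDD of lines; the path is a placeholder.
lines = spark.sparkContext.textFile("hdfs:///user/demo/input.txt")

# Map each line to (word, 1) pairs, then reduce by key to count occurrences.
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

for word, count in counts.take(10):
    print(word, count)

spark.stop()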
Cont. …
Computing and analyzing data
The third stage is to Analyze.
The data is analyzed by processing frameworks such as Pig, Hive, and Impala.
Pig converts the data using map and reduce operations and then analyzes it.
Hive is also based on map and reduce programming and is most suitable for structured data.
Visualizing the results
The fourth stage is Access, which is performed by tools such as Hue and
Cloudera Search.
In this stage, the analyzed data can be accessed by users.
Questions