CSE 2015- Data Analysis and
Visualization
Module 1-Introduction to Data Analysis
Module 1: Introduction to Data Visualization [12 Hrs]
[Bloom’s Level Selected: Understand]
Data collection, Data Preparation Basic Models- Overview of
data visualization - Data Abstraction - Task Abstraction -
Analysis: Four Levels for Validation, Interacting with
Databases, Data Cleaning and Preparation, Handling Missing
Data, Data Transformation.
Python Libraries: NumPy, pandas, matplotlib, GGplot,
Introduction to pandas Data Structures.
Introducing Data
• Facts and statistics collected together for reference or analysis
• Data has to be transformed into a form that is efficient for movement or
processing.
Overview of Data Analysis
• Data analysis is defined as a process of cleaning,
transforming, and modeling data to discover useful
information for business decision-making.
• The purpose of data analysis is to extract useful information
from data and to make decisions based upon that analysis.
• A simple example of data analysis: whenever we make a
decision in our day-to-day life, we think about what happened
last time or what will happen if we choose a particular option.
• This is nothing but analyzing our past or future and making
decisions based on it.
• For that, we gather memories of our past or dreams of our
future.
• That is nothing but data analysis. When an analyst does the
same thing for business purposes, it is called data analysis.
Data Analysis Tools
Data in the Real World
Data Collection
Data collection is the process of gathering and collecting
information from various sources to analyse and make informed
decisions based on the data collected. This can involve various
methods, such as surveys, interviews, experiments, and
observation.
Types of Data Collection
• Primary Data Collection
• Primary data collection is the process of gathering original and firsthand information directly from the source
or target population.
• Secondary Data Collection
• Secondary data collection is the process of gathering information from existing sources that have already
been collected and analyzed by someone else, rather than conducting new research to collect primary
data.
• Qualitative Data Collection
• Qualitative data collection is used to gather non-numerical data such as opinions, experiences, perceptions,
and feelings, through techniques such as interviews, focus groups, observations, and document analysis.
• Quantitative Data Collection
• Quantitative data collection is used to gather numerical data that can be analyzed using statistical
methods. This data is typically collected through surveys, experiments, and other structured data collection
methods.
Data Collection Methods
• Surveys
• Surveys involve asking questions to a sample of individuals or
organizations to collect data. Surveys can be conducted in
person, over the phone, or online.
• Interviews
• Interviews involve a one-on-one conversation between the
interviewer and the respondent. Interviews can be structured or
unstructured and can be conducted in person or over the phone.
• Focus Groups
• Focus groups are group discussions that are moderated by a
facilitator. Focus groups are used to collect qualitative data on a
specific topic.
• Observation
• Observation involves watching and recording the behavior of people,
objects, or events in their natural setting. Observation can be done
overtly or covertly, depending on the research question.
• Experiments
• Experiments involve manipulating one or more variables and
observing the effect on another variable. Experiments are commonly
used in scientific research.
• Case Studies
• Case studies involve in-depth analysis of a single individual,
organization, or event. Case studies are used to gain detailed
information about a specific phenomenon.
Data Preparation
• Data preparation is the process of gathering, combining,
structuring and organizing data so it can be used in business
intelligence (BI), analytics and data visualization applications.
• Data Collection- Relevant data is gathered from operational systems,
data warehouses, data lakes and other data sources.
• Data Discovery and Profiling- The next step is to explore the collected
data to better understand what it contains and what needs to be done
to prepare it for the intended uses.
• Data Cleansing- Next, the identified data errors and issues are
corrected to create complete and accurate data sets.
• Data Structuring- At this point, the data needs to be modeled and
organized to meet the analytics requirements.
• Data Transformation and Enrichment- Data enrichment further
enhances and optimizes data sets as needed, through measures such as
augmenting and adding data.
• Data Validation and Publishing- In this last step, automated routines are
run against the data to validate its consistency, completeness and
accuracy. The prepared data is then stored in a data warehouse, a data
lake or another repository.
Benefits of Data Preparation
• Ensure the data used in analytics applications produces reliable
results
• Identify and fix data issues that otherwise might not be detected
• Enable more informed decision-making by business executives
and operational workers
• Reduce data management and analytics costs
• Avoid duplication of effort in preparing data for use in multiple
applications
• Get a higher ROI from BI and analytics initiatives.
Overview of Data Visualization
• The purpose of visualization is to get insight, by means of
interactive graphics, into various aspects related to some process
we are interested in, such as a scientific simulation or some real-
world process.
Questions Targeted by the Visualization process
Conceptual View of Visualization Process
Data Abstraction
• Data abstraction is the process of concealing irrelevant or
unwanted data from the end user. Data Abstraction is a concept
that allows us to store and manipulate data in a more abstract,
efficient way. This type of abstraction separates the logical
representation of data from its physical storage, giving us the
ability to focus solely on the important aspects of data without
being bogged down by the details.
Challenges with Data Abstraction
Understanding Data Complexity
Data abstraction requires an understanding of both complex data structures
and logical rules. Although abstracting data can involve simplifying it for
easier management purposes, this doesn’t necessarily mean less
complexity.
Hiding Details while Remaining Accurate
Data abstraction is also a way to hide certain details from view without
compromising accuracy or security.
Limitations of Schemas and Abstraction Layers
When it comes to documenting large datasets, predefined schemas are
often used as an easy way to structure the data correctly.
Benefits of Data Abstraction
• Efficiency: Abstraction allows us to manipulate data in a more
abstract way, separating logical representation from physical
storage.
• Focus on Essentials: By ignoring unnecessary details, we can
concentrate on what truly matters.
• System Efficiency: Users access relevant data without hassle,
and the system operates efficiently.
What is Data Validation?
Data validation refers to the process of ensuring the accuracy and
quality of data. It is implemented by building several checks into a
system or report to ensure the logical consistency of input and
stored data.
Types of Data Validation
1. Data Type Check
• A data type check confirms that the data entered has
the correct data type.
2. Code Check
• A code check ensures that a field is selected from a valid
list of values or follows certain formatting rules.
3. Range Check
A range check will verify whether input data falls within a
predefined range.
4. Format Check
Many data types follow a certain predefined format. A common
use case is date columns that are stored in a fixed format like
“YYYY-MM-DD” or “DD-MM-YYYY.” A data validation procedure
that ensures dates are in the proper format helps maintain
consistency across data and through time.
5. Consistency Check
• A consistency check is a type of logical check that
confirms the data’s been entered in a logically consistent
way.
6. Uniqueness Check
• Some data like IDs or e-mail addresses are unique by
nature. A database should likely have unique entries on
these fields. A uniqueness check ensures that an item is
not entered multiple times into a database.
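In practice, these checks can be scripted. A minimal pandas sketch of the checks above; the column names and rules here are hypothetical:

import pandas as pd

df = pd.DataFrame({'id': [1, 2, 2],
                   'age': [25, 130, 40],
                   'joined': ['2024-01-05', '2024-13-01', '2024-02-10']})

type_ok = pd.api.types.is_integer_dtype(df['age'])          # data type check
range_ok = df['age'].between(0, 120)                        # range check
format_ok = pd.to_datetime(df['joined'], format='%Y-%m-%d',
                           errors='coerce').notna()         # format check: bad dates become NaT
unique_ok = ~df['id'].duplicated(keep=False)                # uniqueness check
print(type_ok, range_ok.tolist(), format_ok.tolist(), unique_ok.tolist())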
What is Data Cleaning?
• Data cleaning is the process of fixing or removing incorrect,
corrupted, incorrectly formatted, duplicate, or incomplete
data within a dataset.
• If data is incorrect, outcomes and algorithms are unreliable,
even though they may look correct.
Data Cleaning vs Data Transformation
What is the difference between Data Cleaning and Data
Transformation?
• Data cleaning is the process that removes data that does not
belong in your dataset. Data transformation is the process of
converting data from one format or structure into another.
• Transformation processes can also be referred to as data
wrangling, or data munging, transforming and mapping data
from one "raw" data form into another format for
warehousing and analyzing.
Data Cleaning Steps
Step 1: Remove duplicate or irrelevant observations
Step 2: Fix structural errors
Structural errors are when you measure or transfer data and
notice strange naming conventions, typos, or incorrect
capitalization. These inconsistencies can cause mislabeled
categories or classes.
Step 3: Filter unwanted outliers
Step 4: Handle missing data
Step 5: Validate and QA
At the end of the data cleaning process, you should be able to
answer these questions as a part of basic validation:
• Does the data make sense?
• Does the data follow the appropriate rules for its field?
• Does it prove or disprove your working theory, or bring any
insight to light?
• Can you find trends in the data to help you form your next
theory?
• If not, is that because of a data quality issue?
Data Transformation
1. Removing Duplicates
Duplicate rows may be found in a DataFrame for any number of
reasons. The duplicated method returns a boolean Series indicating
whether each row is a duplicate; relatedly, drop_duplicates returns a
DataFrame keeping only the rows where that array is False. Suppose
we also had an additional column of values and wanted to filter
duplicates based only on the 'k1' column, as in the sketch below.
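The slide's own code is not preserved; a minimal sketch in the same spirit (the column names are illustrative):

import pandas as pd

data = pd.DataFrame({'k1': ['one', 'two'] * 3 + ['two'],
                     'k2': [1, 1, 2, 3, 3, 4, 4]})
print(data.duplicated())              # boolean Series: True for duplicate rows
print(data.drop_duplicates())         # keeps rows where duplicated() is False
data['v1'] = range(7)
print(data.drop_duplicates(['k1']))   # filter duplicates based on 'k1' only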
2. Transforming Data Using a Function or Mapping
• Consider the following hypothetical data collected about
various kinds of meat:
• Suppose you wanted to add a column indicating the type of animal
that each food comes from. Let’s write down a mapping of each
distinct meat type to the kind of animal:
The map method on a Series accepts a function or dict-like object containing a mapping. We
need to convert each value to lowercase using the str.lower Series method:
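The meat table and mapping from the slides are not preserved; a hedged reconstruction of the same pattern:

import pandas as pd

data = pd.DataFrame({'food': ['bacon', 'Pastrami', 'Bacon', 'corned beef',
                              'honey ham', 'nova lox'],
                     'ounces': [4, 6, 8, 7.5, 5, 6]})
meat_to_animal = {'bacon': 'pig', 'pastrami': 'cow', 'corned beef': 'cow',
                  'honey ham': 'pig', 'nova lox': 'salmon'}
# lowercase each value first so capitalized entries still match the mapping
data['animal'] = data['food'].str.lower().map(meat_to_animal)
print(data)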
3. Replacing Multiple Values
If you want to replace multiple values at once, you instead pass a list and
then the substitute value:
To use a different replacement for each value, pass a list of
substitutes:
The argument passed can also be a dict:
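A small sketch of all three forms on a Series with hypothetical sentinel values:

import numpy as np
import pandas as pd

data = pd.Series([1., -999., 2., -999., -1000., 3.])   # -999/-1000 stand in for missing
print(data.replace([-999, -1000], np.nan))        # one substitute for the whole list
print(data.replace([-999, -1000], [np.nan, 0]))   # a different substitute per value
print(data.replace({-999: np.nan, -1000: 0}))     # the dict form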
4. Renaming Axis Indexes
Like values in a Series, axis labels can be similarly transformed by a
function or mapping of some form to produce new, differently
labeled objects.
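A brief sketch (the index labels here are made up):

import numpy as np
import pandas as pd

data = pd.DataFrame(np.arange(12).reshape((3, 4)),
                    index=['Ohio', 'Colorado', 'New York'],
                    columns=['one', 'two', 'three', 'four'])
print(data.rename(index=str.title, columns=str.upper))   # new object, transformed labels
data.index = data.index.map(lambda x: x[:4].upper())     # modify the index in place
print(data)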
5. Discretization and Binning
Continuous data is often discretized or otherwise separated into
“bins” for analysis. Suppose you have data about a group of people
in a study, and you want to group them into discrete age buckets:
ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
Let’s divide these into bins of 18 to 25, 26 to 35, 36 to 60, and finally
61 and older. To do so, you have to use cut, a function in pandas:
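A sketch of the call, using the ages and bin edges above:

import pandas as pd

ages = [20, 22, 25, 27, 21, 23, 37, 31, 61, 45, 41, 32]
bins = [18, 25, 35, 60, 100]
cats = pd.cut(ages, bins)               # each age is assigned to an interval such as (18, 25]
print(cats)
print(pd.Series(cats).value_counts())   # count of ages per bin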
6. Detecting and Filtering Outliers
Filtering or transforming outliers is largely a matter of applying array
operations. Consider a DataFrame with some normally distributed
data:
To select all rows having a value exceeding 3 or –3, you can use
the any method on a boolean DataFrame:
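A minimal sketch along those lines (random data, so the output varies run to run):

import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.randn(1000, 4))   # normally distributed sample
col = data[2]
print(col[np.abs(col) > 3])                     # outliers in a single column
print(data[(np.abs(data) > 3).any(axis=1)])     # rows where any |value| exceeds 3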
7. Computing Indicator/Dummy Variables
Another type of transformation for statistical modeling or
machine learning applications is converting a categorical variable
into a “dummy” or “indicator” matrix.
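pandas provides the get_dummies function for this; a small illustrative sketch:

import pandas as pd

df = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'b'],
                   'data1': range(6)})
dummies = pd.get_dummies(df['key'], prefix='key')   # one indicator column per category
print(df[['data1']].join(dummies))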
NumPy
NumPy is a Python package. It stands for 'Numerical Python'. It is a
library consisting of multidimensional array objects and a collection
of routines for processing arrays.
Numeric, the ancestor of NumPy, was developed by Jim Hugunin.
Another package, Numarray, was also developed with some
additional functionality. In 2005, Travis Oliphant created the NumPy
package by incorporating the features of Numarray into Numeric.
Operations using NumPy
Using NumPy, a developer can perform the following
operations −
• Mathematical and logical operations on arrays.
• Fourier transforms and routines for shape manipulation.
• Operations related to linear algebra.
• NumPy has in-built functions for linear algebra and random
number generation.
NumPy – A Replacement for MATLAB
NumPy is often used along with packages like SciPy (Scientific
Python) and Matplotlib (a plotting library).
This combination is widely used as a replacement for MATLAB, a
popular platform for technical computing.
However, this Python alternative to MATLAB is now seen as a more
modern and complete programming language.
It is open source, which is an added advantage of NumPy.
The most important object defined in NumPy is an N-dimensional array type
called ndarray. It describes the collection of items of the same type. Items in the
collection can be accessed using a zero-based index.
Every item in an ndarray takes the same size of block in the memory. Each
element in the ndarray is described by a data-type object (called dtype).
A diagram (omitted here) shows the relationship between ndarray, the data
type object (dtype) and the array scalar type.
The NumPy package is imported using the following syntax:
import numpy as np
numpy.array(object, dtype = None, copy = True, order = None, subok =
False, ndmin = 0)
The above constructor takes the following parameters:
1. object: Any object exposing the array interface, or any (nested) sequence.
2. dtype: Desired data type of the array. Optional.
3. copy: Optional. By default (True), the object is copied.
4. order: C (row major) or F (column major) or A (any) (default).
5. subok: By default, the returned array is forced to be a base-class array. If True, sub-classes are passed through.
6. ndmin: Specifies the minimum dimensions of the resultant array.
Example:
import numpy as np
a = np.array([1,2,3])
print(a)

# more than one dimension
import numpy as np
a = np.array([[1, 2], [3, 4]])
print(a)
Example:
# minimum dimensions
import numpy as np
a = np.array([1, 2, 3, 4, 5], ndmin = 2)
print(a)

# dtype parameter
import numpy as np
a = np.array([1, 2, 3], dtype = complex)
print(a)
NumPy - Array Attributes
ndarray.shape:
This array attribute returns a tuple consisting of array dimensions. It can also be
used to resize the array.
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
print(a.shape)

# this resizes the ndarray
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
a.shape = (3,2)
print(a)
NumPy - Array Attributes
import numpy as np
a = np.array([[1,2,3],[4,5,6]])
b = a.reshape(3,2)
print(b)
ndarray.ndim:
This array attribute returns the number of array dimensions.
# an array of evenly spaced numbers
import numpy as np
a = np.arange(24)
print(a)
NumPy - Array Attributes
# this is a one-dimensional array
import numpy as np
a = np.arange(24)
print(a.ndim)
# now reshape it
b = a.reshape(2,4,3)
print(b)
# b now has three dimensions
ndarray.itemsize:
This array attribute returns the length of each element of the array in bytes.
# dtype of array is int8 (1 byte)
import numpy as np
x = np.array([1,2,3,4,5], dtype = np.int8)
print(x.itemsize)
# dtype of array is now float32 (4 bytes)
import numpy as np
x = np.array([1,2,3,4,5], dtype = np.float32)
print(x.itemsize)
NumPy - Array Creation Routines
A new ndarray object can be constructed by any of the following array creation
routines or using a low-level ndarray constructor.
numpy.empty:
It creates an uninitialized array of specified shape and dtype. It uses the following
constructor −
numpy.empty(shape, dtype = float, order = 'C')
Parameters:
1. Shape: Shape of an empty array in int or tuple of int
2. Dtype: Desired output data type. Optional
3. Order: 'C' for C-style row-major array, 'F' for FORTRAN-style column-major array
NumPy - Array Creation Routines
import numpy as np
x = np.empty([3,2], dtype = int)
print(x)
numpy.zeros:
Returns a new array of specified size, filled with zeros.
numpy.zeros(shape, dtype = float, order = 'C')
Parameters:
1. Shape: Shape of an empty array in int or sequence of int
2. Dtype: Desired output data type. Optional
3. Order: 'C' for C-style row-major array, 'F' for FORTRAN-style column-major array
NumPy - Array Creation Routines
# array of five zeros. Default dtype is float
import numpy as np
x = np.zeros(5)
print(x)

import numpy as np
x = np.zeros((5,), dtype = int)   # np.int is deprecated; plain int works
print(x)

# custom type
import numpy as np
x = np.zeros((2,2), dtype = [('x', 'i4'), ('y', 'i4')])
print(x)
numpy.ones:
Returns a new array of specified size and type, filled with ones.
numpy.ones(shape, dtype = None, order = 'C')
NumPy - Array From Existing Data
numpy.asarray:
This function is similar to numpy.array except that it has fewer parameters.
This routine is useful for converting a Python sequence into an ndarray.
numpy.asarray(a, dtype = None, order = None)
Parameters:
1. a: Input data in any form such as a list, list of tuples, tuples, tuple of tuples or tuple of lists
2. dtype: By default, the data type of the input data is applied to the resultant ndarray
3. order: C (row major) or F (column major). C is the default
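An illustrative example (not from the original slides):

# convert a list into an ndarray
import numpy as np
x = [1, 2, 3]
a = np.asarray(x)
print(a)
b = np.asarray(x, dtype = float)   # dtype set explicitly
print(b)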
Handling Missing Data
dropna( ): drop missing values.
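The slides that followed are not preserved; a minimal sketch of dropna and the companion fillna method:

import numpy as np
import pandas as pd

data = pd.Series([1, np.nan, 3.5, np.nan, 7])
print(data.dropna())            # drop missing values
print(data.fillna(0))           # or fill them with a constant
df = pd.DataFrame([[1., 6.5, 3.], [1., np.nan, np.nan],
                   [np.nan, np.nan, np.nan]])
print(df.dropna())              # drops any row containing NA
print(df.dropna(how='all'))     # drops only the all-NA rows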
What is Matplotlib?
• Matplotlib is an open-source drawing library that
supports various drawing types
• You can generate plots, histograms, bar charts, and
other types of charts with just a few lines of code
• It’s often used in web application servers, shells, and
Python scripts
Pyplot is a Matplotlib module that provides simple functions for adding
plot elements, such as lines, images, text, etc. to the axes in the current
figure.
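For example, a minimal pyplot script (the data is illustrative):

import matplotlib.pyplot as plt

x = [1, 2, 3, 4]
y = [1, 4, 9, 16]
plt.plot(x, y)            # add a line to the current axes
plt.xlabel('x')           # add text elements
plt.ylabel('x squared')
plt.title('A simple pyplot figure')
plt.show()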
Matplotlib Subplots
You can use the subplot() method to add more than one plot in a figure.
Syntax: plt.subplot(nrows, ncols, index)
The three integer arguments specify the number of rows, the number of
columns, and the index of the subplot in the grid.
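A small sketch of the call pattern (the data plotted is made up):

import matplotlib.pyplot as plt

plt.subplot(1, 2, 1)                  # 1 row, 2 columns, first plot
plt.plot([1, 2, 3], [1, 2, 3])
plt.subplot(1, 2, 2)                  # second plot in the same figure
plt.bar(['a', 'b', 'c'], [3, 1, 2])
plt.show()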