Introduction to Data Analysis
techniques using Python
First steps into insight discovery using Python and specialized libraries
Alex Chalini - sentoul@Hotmail.com
About me
• Computer Systems Engineer, Master of Computer Science (…)
• Actively working in Business Solutions development since
2001
• My areas of specialty are Business Intelligence, Data Analysis,
Data Visualization, DB modeling and optimization.
• I am also interested in Data Science path for engineering.
2
Alex Chalini
Agenda
• What is Data Analysis?
• Python Libraries for Data Analysis and Data Science
• Hands-on data analysis workflow using Python
• Statistical Analysis & ML overview
• Big Data & Data Analytics working together
• Applications in the Pharma industry
3
Question:
The process of systematically applying
techniques to evaluate data is known as?
A. Data Munging
B. Data Analysis
C. Data Science
D. Data Bases
4
Data Analysis:
•What is it?
•Apply logical techniques to describe, condense, recap and evaluate data, and to illustrate information
•Goals of Data Analysis:
1. Discover useful information
2. Provide insights
3. Suggest conclusions
4. Support Decision Making
5
Python Data Analysis Basics
• Series
• DataFrame
• Creating a DataFrame from a dict
• Select columns, Select rows with Boolean indexing
6
Essential Concepts
• A Series is a named Python list (a dict with a list as its value):
{ 'grades' : [50, 90, 100, 45] }
• A DataFrame is a dictionary of Series (one named list per column):
{ 'names' : ['bob', 'ken', 'art', 'joe'],
  'grades' : [50, 90, 100, 45] }
7
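A minimal sketch of both ideas in pandas (the column names come from the slide above; this is illustrative, not the deck's own code):
In [ ]: #Build a Series and a DataFrame from plain dicts/lists
import pandas as pd
grades = pd.Series([50, 90, 100, 45], name='grades')
df = pd.DataFrame({ 'names' : ['bob', 'ken', 'art', 'joe'],
                    'grades' : [50, 90, 100, 45] })
df['grades']             # select a column -> Series
df[ df['grades'] > 60 ]  # Boolean indexing -> filtered rows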
Python Libraries for Data Analysis and Data
Science
Many popular Python toolboxes/libraries:
• NumPy
• SciPy
• Pandas
• SciKit-Learn
Visualization libraries
• matplotlib
• Seaborn
8
All these libraries are
free to download and
use
Analytics Workflow
9
Overview of Python Libraries for Data
Scientists
Reading Data; Selecting and Filtering the Data; Data manipulation,
sorting, grouping, rearranging
Plotting the data
Descriptive statistics
Inferential statistics
NumPy:
 introduces objects for multidimensional arrays and matrices, as well as
functions that make it easy to perform advanced mathematical and statistical
operations on those objects
 provides vectorization of mathematical operations on arrays and matrices
which significantly improves the performance
 many other python libraries are built on NumPy
10
Link: http://www.numpy.org/
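A small illustration of what vectorization buys you (a sketch, not from the deck):
In [ ]: #Element-wise math on whole arrays, no explicit Python loop
import numpy as np
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.arange(4)       # array([0, 1, 2, 3])
a * b + 1              # element-wise multiply and add
m = np.array([[1, 2], [3, 4]])
m @ m                  # matrix product
m.mean(axis=0)         # column means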
SciPy:
 collection of algorithms for linear algebra, differential equations, numerical
integration, optimization, statistics and more
 part of SciPy Stack
 built on NumPy
11
Link: https://www.scipy.org/scipylib/
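For example, numerical integration and optimization are one call each (a sketch with toy functions chosen for illustration):
In [ ]: #Integrate x^2 from 0 to 1 (exact answer 1/3), then minimize (x-3)^2
from scipy import integrate, optimize
value, error = integrate.quad(lambda x: x**2, 0, 1)
result = optimize.minimize(lambda x: (x - 3)**2, x0=[0.0])
result.x               # approximately array([3.])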
• Pandas provides built-in data structures which simplify the manipulation and analysis of data sets.
• Pandas is easy to use and powerful, but “with great power comes great responsibility”
• adds data structures and tools designed to work with table-like data (similar to Series and Data
Frames in R)
• provides tools for data manipulation: reshaping, merging, sorting, slicing, aggregation etc.
• allows handling missing data
I cannot teach you all things Pandas; we will focus on how it works, so you can figure out the rest
on your own.
12
Link: http://pandas.pydata.org/
Pandas is a Python package for data analysis.
Link: http://scikit-learn.org/
SciKit-Learn:
 provides machine learning algorithms: classification, regression, clustering,
model validation etc.
 built on NumPy, SciPy and matplotlib
13
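A minimal clustering sketch (toy data, not from the deck):
In [ ]: #Cluster six 2-D points into two groups with k-means
import numpy as np
from sklearn.cluster import KMeans
X = np.array([[1, 2], [1, 4], [1, 0],
              [10, 2], [10, 4], [10, 0]])
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
kmeans.labels_           # cluster assignment for each point
kmeans.cluster_centers_  # coordinates of the two centroids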
Link: https://www.tensorflow.org/
TensorFlow™ is an open source software library for high-performance numerical
computation.
It comes with strong support for machine learning and deep learning, and its
flexible numerical computation core is used across many other scientific domains
14
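A minimal sketch of that numerical core, written for TensorFlow 2's eager execution (the 1.x API current when this deck was written required a Session):
In [ ]: #Tensors, linear algebra and gradients
import tensorflow as tf
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
tf.matmul(a, a)                 # matrix product, evaluated eagerly
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
tape.gradient(y, x)             # dy/dx = 2x = 6.0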
matplotlib:
 a Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats
 a set of functionalities similar to those of MATLAB
 line plots, scatter plots, barcharts, histograms, pie charts etc.
 relatively low-level; some effort needed to create advanced visualization
Link: https://matplotlib.org/
15
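A small sketch of the kind of figure code involved (illustrative only):
In [ ]: #A line plot and a histogram, saved to a hardcopy format
import matplotlib.pyplot as plt
xs = range(10)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.plot(xs, [x ** 2 for x in xs], marker='o')
ax1.set_title('Line plot')
ax2.hist([1, 1, 2, 3, 3, 3, 4], bins=4)
ax2.set_title('Histogram')
fig.savefig('figure.png')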
Seaborn:
 based on matplotlib
 provides a high-level interface for drawing attractive statistical graphics
 Similar (in style) to the popular ggplot2 library in R
Link: https://seaborn.pydata.org/
16
Hands-on workflow using Python
17
Loading Python Libraries
18
In [ ]: #Import Python Libraries
import numpy as np
import scipy as sp
import pandas as pd
import matplotlib as mpl
import seaborn as sns
Reading data using pandas
19
In [ ]: #Read csv file
df = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/Salaries.csv")
There are a number of pandas commands to read other data formats:
pd.read_excel('myfile.xlsx',sheet_name='Sheet1', index_col=None, na_values=['NA'])
pd.read_stata('myfile.dta')
pd.read_sas('myfile.sas7bdat')
pd.read_hdf('myfile.h5','df')
Note: The above command has many optional arguments to fine-tune the data import process.
Exploring data frames
20
In [3]: #List first 5 records
df.head()
Out[3]:
Data Frame data types
Pandas Type Native Python Type Description
object string The most general dtype. Will be assigned to
your column if the column has mixed types (numbers and strings).
int64 int Numeric values. 64 refers to the bits of memory
allocated to hold the value.
float64 float Numeric values with decimals. If a column
contains numbers and NaNs (see below), pandas will default to
float64, in case your missing value has a decimal.
datetime64, timedelta[ns] N/A (but see the datetime module in
Python's standard library) Values meant to hold time data.
Look into these for time series experiments.
21
Data Frame data types
22
In [4]: #Check a particular column type
df['salary'].dtype
Out[4]: dtype('int64')
In [5]: #Check types for all the columns
df.dtypes
Out[5]: rank          object
        discipline    object
        phd            int64
        service        int64
        sex           object
        salary         int64
        dtype: object
Data Frames attributes
23
Python objects have attributes and methods.
df.attribute description
dtypes list the types of the columns
columns list the column names
axes list the row labels and column names
ndim number of dimensions
size number of elements
shape return a tuple representing the dimensionality
values numpy representation of the data
Hands-on exercises
24
 Find how many records this data frame has;
 How many elements are there?
 What are the column names?
 What types of columns do we have in this data frame?
In [5]: df.shape
Out[5]: (4, 3)
>>> df.count()
Person 4
Age 4
Single 5
dtype: int64
list(my_dataframe.columns.values)
Also you can simply use:
list(my_dataframe)
>>> df2.dtypes
Series or DataFrame?
Match the code to the
result. One result is a Series,
the other a DataFrame
1. df['Quarter']
2. df[ ['Quarter'] ]
A. Series B. Data Frame
25
Data Frames methods
26
df.method() description
head( [n] ), tail( [n] ) first/last n rows
describe() generate descriptive statistics (for numeric columns only)
max(), min() return max/min values for all numeric columns
mean(), median() return mean/median values for all numeric columns
std() standard deviation
sample([n]) returns a random sample of the data frame
dropna() drop all the records with missing values
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)
Selecting a column in a Data Frame
Method 1: Subset the data frame using column name:
df['gender']
Method 2: Use the column name as an attribute:
df.gender
Note: there is an attribute rank for pandas data frames, so to select a column with a name
"rank" we should use method 1.
27
Data Frames groupby method
28
Using "group by" method we can:
• Split the data into groups based on some criteria
• Calculate statistics (or apply a function) to each group
• Similar to the dplyr package in R
In [ ]: #Group data using rank
df_rank = df.groupby(['rank'])
In [ ]: #Calculate mean value for each numeric column per each group
df_rank.mean()
Data Frames groupby method
29
Once the groupby object is created, we can calculate various statistics for each group:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby('rank')[['salary']].mean()
Note: If single brackets are used to specify the column (e.g. salary), then the output is a Pandas Series object.
When double brackets are used, the output is a Data Frame.
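A quick way to check this in a session (assuming the Salaries data frame loaded earlier):
In [ ]: #Single vs. double brackets
type(df.groupby('rank')['salary'].mean())    # pandas Series
type(df.groupby('rank')[['salary']].mean())  # pandas DataFrame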
Data Frames groupby method
30
groupby performance notes:
- no grouping/splitting occurs until it's needed. Creating the groupby object
only verifies that you have passed a valid mapping
- by default the group keys are sorted during the groupby operation. You may
want to pass sort=False for potential speedup:
In [ ]: #Calculate mean salary for each professor rank:
df.groupby(['rank'], sort=False)[['salary']].mean()
Data Frame: filtering
31
To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example, if we want to subset the rows in which the salary
value is greater than $120K:
In [ ]: #Select rows where salary is greater than $120,000:
df_sub = df[ df['salary'] > 120000 ]
In [ ]: #Select only those rows that contain female professors:
df_f = df[ df['sex'] == 'Female' ]
Any comparison operator can be used to subset the data:
> greater; >= greater or equal;
< less; <= less or equal;
== equal; != not equal;
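Conditions can also be combined; each one must be wrapped in parentheses (a sketch on the same Salaries data):
In [ ]: #Female professors earning more than $120K
df[ (df['salary'] > 120000) & (df['sex'] == 'Female') ]
# & is "and", | is "or", ~ is "not" in Boolean indexing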
Boolean filtering
Which rows are included in this
Boolean index?
df[ df[‘Sold’] < 110 ]
A. 0, 1, 2
B. 1, 2, 3
C. 0, 1
D. 0, 3
32
Data Frames: Slicing
33
There are a number of ways to subset the Data Frame:
• one or more columns
• one or more rows
• a subset of rows and columns
Rows and columns can be selected by their position or label
Data Frames: Slicing
34
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame):
In [ ]: #Select column salary:
df['salary']
When we need to select more than one column and/or want the output to be a
DataFrame, we should use double brackets:
In [ ]: #Select columns rank and salary:
df[['rank','salary']]
Data Frames: Selecting rows
35
If we need to select a range of rows, we can specify the range using ":"
In [ ]: #Select rows by their position:
df[10:20]
Notice that the first row has position 0, and the last value in the range is excluded:
so for the range 0:10, the first 10 rows are returned, with positions starting at 0
and ending at 9
Data Frames: method loc
36
If we need to select a range of rows using their labels, we can use the loc method:
In [ ]: #Select rows by their labels:
df_sub.loc[10:20,['rank','sex','salary']]
Out[ ]:
Data Frames: method iloc
37
If we need to select a range of rows and/or columns using their positions, we can
use the iloc method:
In [ ]: #Select rows and columns by their positions:
df_sub.iloc[10:20,[0, 3, 4, 5]]
Out[ ]:
Data Frames: method iloc (summary)
38
df.iloc[0] # First row of a data frame
df.iloc[i] #(i+1)th row
df.iloc[-1] # Last row
df.iloc[:, 0] # First column
df.iloc[:, -1] # Last column
df.iloc[0:7] #First 7 rows
df.iloc[:, 0:2] #First 2 columns
df.iloc[1:3, 0:2] #Second through third rows and first 2 columns
df.iloc[[0,5], [1,3]] #1st and 6th rows and 2nd and 4th columns
Data Frames: Sorting
39
We can sort the data by a value in the column. By default the sorting will occur in
ascending order and a new data frame is returned.
In [ ]: # Create a new data frame from the original, sorted by the column service
df_sorted = df.sort_values( by ='service')
df_sorted.head()
Out[ ]:
Data Frames: Sorting
40
We can sort the data using 2 or more columns:
In [ ]: df_sorted = df.sort_values( by =['service', 'salary'], ascending = [True, False])
df_sorted.head(10)
Out[ ]:
Missing Values
41
Missing values are marked as NaN
In [ ]: # Read a dataset with missing values
flights = pd.read_csv("http://rcs.bu.edu/examples/python/data_analysis/flights.csv")
In [ ]: # Select the rows that have at least one missing value
flights[flights.isnull().any(axis=1)].head()
Out[ ]:
Missing Values
42
There are a number of methods to deal with missing values in the data frame:
df.method() description
dropna() Drop missing observations
dropna(how='all') Drop observations where all cells are NA
dropna(axis=1, how='all') Drop column if all the values are missing
dropna(thresh = 5) Drop rows that contain fewer than 5 non-missing values
fillna(0) Replace missing values with zeros
isnull() returns True if the value is missing
notnull() Returns True for non-missing values
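A short sketch combining these on the flights data frame from the previous slide:
In [ ]: #Count, drop and fill missing values
flights['dep_delay'].isnull().sum()  # number of missing departure delays
flights_clean = flights.dropna()     # drop rows with any missing value
flights_zero = flights.fillna(0)     # or replace missing values with zeros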
Missing Values
43
• When summing the data, missing values will be treated as zero
• If all values are missing, the sum will be equal to NaN
• cumsum() and cumprod() methods ignore missing values but preserve them in
the resulting arrays
• Missing values in GroupBy method are excluded (just like in R)
• Many descriptive statistics methods have a skipna option to control whether missing
data should be excluded. This value is set to True by default (unlike R)
Aggregation Functions in Pandas
44
Aggregation - computing a summary statistic about each group, e.g.
• compute group sums or means
• compute group sizes/counts
Common aggregation functions:
min, max
count, sum, prod
mean, median, mode, mad
std, var
Aggregation Functions in Pandas
45
The agg() method is useful when multiple statistics are computed per column:
In [ ]: flights[['dep_delay','arr_delay']].agg(['min','mean','max'])
Out[ ]:
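agg() also accepts a dict when different statistics are wanted per column (a sketch on the same flights data):
In [ ]: flights.agg({'dep_delay': ['min', 'mean'], 'arr_delay': ['mean', 'max']})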
Basic Descriptive Statistics
46
df.method() description
describe Basic statistics (count, mean, std, min, quantiles, max)
min, max Minimum and maximum values
mean, median, mode Arithmetic average, median and mode
var, std Variance and standard deviation
sem Standard error of mean
skew Sample skewness
kurt kurtosis
Graphics to explore the data
47
To show graphs within a Python notebook, include the inline directive:
In [ ]: %matplotlib inline
The Seaborn package is built on matplotlib but provides a high-level
interface for drawing attractive statistical graphics, similar to the ggplot2
library in R. It specifically targets statistical data visualization
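A one-line example of the kind of statistical graphic this enables (assuming the Salaries data frame from earlier):
In [ ]: #Salary distribution per professor rank
import seaborn as sns
sns.boxplot(x='rank', y='salary', data=df)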
Graphics
48
function description
distplot histogram
barplot estimate of central tendency for a numeric variable
violinplot similar to boxplot, also shows the probability density of the
data
jointplot Scatterplot
regplot Regression plot
pairplot Pairplot
boxplot boxplot
swarmplot categorical scatterplot
factorplot General categorical plot
49
Statistical Analysis & ML overview
50
statsmodels and scikit-learn - both have a number of functions for statistical analysis
The first one is mostly used for regular analysis using R-style formulas, while scikit-learn and TensorFlow are more
tailored for Machine Learning.
statsmodels:
• linear regressions
• ANOVA tests
• hypothesis testings
• many more ...
scikit-learn:
• kmeans
• support vector machines
• random forests
• many more ...
Tensorflow:
• Image Recognition
• Neural Networks
• Linear Models
• TensorFlow Wide & Deep Learning
• etc...
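As a taste of the R-style formula interface, a regression sketch on the Salaries data frame from earlier (illustrative, not the deck's own code):
In [ ]: #Model salary as a function of years of service
import statsmodels.formula.api as smf
model = smf.ols('salary ~ service', data=df).fit()
model.summary()        # coefficients, R-squared, hypothesis tests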
51
Big Data & Data Analytics working together
Big Data & Data Analytics working together
WORKING WITH BIG DATA: MAP-REDUCE
• When working with large datasets, it’s often useful to utilize MapReduce.
MapReduce is a method for working with big data which allows you to
first map the data using a particular attribute, filter or grouping, and then
reduce those groups using a transformation or aggregation mechanism. For
example, if I had a collection of cats, I could first map them by what color
they are and then reduce by summing those groups. At the end of the
MapReduce process, I would have a list of all the cat colors and the sum of
the cats in each of those color groupings (sketched in code after this slide).
• Almost every data science library has some MapReduce functionality built
in. There are also numerous larger libraries you can use to manage the data
and MapReduce over a series of computers (or a cluster / grouping of
computers). Python can speak to these services and software and extract
the results for further reporting, visualization or alerting.
52
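The cat example above can be sketched in plain Python (hypothetical data, for illustration):
In [ ]: #Map cats to (color, 1) pairs, then reduce by summing per color
from functools import reduce
cats = [{'name': 'Tom', 'color': 'grey'},
        {'name': 'Felix', 'color': 'black'},
        {'name': 'Luna', 'color': 'black'}]
mapped = [(cat['color'], 1) for cat in cats]
def add_pair(counts, pair):
    color, n = pair
    counts[color] = counts.get(color, 0) + n
    return counts
reduce(add_pair, mapped, {})   # {'grey': 1, 'black': 2}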
Big Data & Data Analytics working together
Hadoop
• One of the most popular frameworks for MapReduce with large datasets is Apache’s Hadoop. Hadoop
uses cluster computing to allow for faster data processing of large datasets. There are many
Python libraries you can use to send your data or jobs to Hadoop, and which one you choose
should be a mixture of what is simplest to set up with your infrastructure and
what seems like the clearest library for your use case.
Spark
• If you have large data which might work better in streaming form (real-time data, log data,
API data), then Apache’s Spark is a great tool. PySpark, the Python Spark API, allows you to
quickly get up and running and start mapping and reducing your dataset. It’s also incredibly
popular with machine learning problems, as it has some built-in algorithms.
• There are several other large scale data and job libraries you can use with Python, but for now
we can move along to looking at data with Python.
53
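Here is what the same map-and-reduce looks like in PySpark (a minimal sketch; assumes a local Spark installation):
In [ ]: #Count cat colors with Spark RDDs
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('cats').getOrCreate()
colors = spark.sparkContext.parallelize(['grey', 'black', 'black'])
(colors.map(lambda c: (c, 1))
       .reduceByKey(lambda a, b: a + b)
       .collect())               # [('grey', 1), ('black', 2)]
spark.stop()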
Big Data & Data Analytics working together
54
Apache Spark is written in the Scala programming language. To
support Python with Spark, the Apache Spark community released
a tool, PySpark. Using PySpark, you can work with RDDs in the
Python programming language as well.
BigQuery is Google's serverless, highly scalable, low-cost enterprise data
warehouse.
BigQuery allows organizations to capture and analyze data in real-time
using its powerful streaming ingestion capability so that your insights are
always current.
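Querying BigQuery from Python is similarly compact (a sketch using the google-cloud-bigquery client and a public dataset; assumes Google Cloud credentials are configured):
In [ ]: #Top five names in a BigQuery public dataset
from google.cloud import bigquery
client = bigquery.Client()
query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name ORDER BY total DESC LIMIT 5
"""
for row in client.query(query).result():
    print(row.name, row.total)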
Industries using Real-Time Big Data Analytics
• e-Commerce
• Social Networks
• Healthcare
• Fraud Detection
55
Optimize the customer service process in a flow of
continuous data, making life-saving decisions in
a safe environment to run the business.
56
Applications in the Pharma Industry
57
58
59
Thank you!
60