Data Science Lab
5. Regression
6. Z-test
7. T-test
8. ANOVA
Experiment No: 1
Program:
import pandas as pd
data = {"calories": [420, 380, 390], "duration": [50, 40, 45]}
# load the data into a DataFrame object
df = pd.DataFrame(data)
print(df.loc[0])
Output:
calories 420
duration 50
Name: 0, dtype: int64
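A small follow-up, not in the original listing: passing a list of labels to loc returns those rows as a DataFrame rather than a single Series.
print(df.loc[[0, 1]])   # rows 0 and 1 as a DataFrame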
Program:
import matplotlib.pyplot as plt
a = [1, 2, 3, 4, 5]
b = [0, 0.6, 0.2, 15, 10, 8, 16, 21]
plt.plot(a)
# 'o' is for circle markers and 'r' is for red
plt.plot(b, "or")
plt.plot(list(range(0, 22, 3)))
# naming the x-axis
plt.xlabel('Day ->')
# naming the y-axis
plt.ylabel('Temp ->')
c = [4, 2, 6, 8, 3, 20, 13, 15]
plt.plot(c, label='4th Rep')
# get the current axes
ax = plt.gca()
# hide the individual boundary lines of the graph body
ax.spines['right'].set_visible(False)
ax.spines['top'].set_visible(False)
# set the bounds of the left boundary line to a fixed range
ax.spines['left'].set_bounds(-3, 40)
# set the interval at which the x-axis places its marks
plt.xticks(list(range(-3, 10)))
# show the legend for the labelled series and display the plot
plt.legend()
plt.show()
Output:
Program:
# Python program to get the average of a list
# Importing the NumPy module
import numpy as np
# Taking a list of elements
lst = [2, 40, 2, 502, 177, 7, 9]
# Calculating the average using average()
print(np.average(lst))
Output:
105.57142857142857
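As a quick check, the seven values sum to 739, and 739 / 7 ≈ 105.571, matching the printed average.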
# Python program to get the variance of a list
# Importing the NumPy module
import numpy as np
# Taking a list of elements
lst = [2, 4, 4, 4, 5, 5, 7, 9]
# Calculating the variance using var()
print(np.var(lst))
Output:
4.0
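Here the mean is 5; the squared deviations (9, 1, 1, 1, 0, 0, 4, 16) sum to 32, and 32 / 8 = 4.0, the population variance reported above.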
# Python program to get the standard deviation of a list
# Importing the NumPy module
import numpy as np
# Taking a list of elements
lst = [290, 124, 127, 899]
# Calculating the standard deviation using std()
print(np.std(lst))
Output:
318.35750344541907
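To verify: the mean of the four values is 360, the squared deviations sum to 405406, so the population variance is 101351.5 and its square root is about 318.36, as printed.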
Experiment No: 4
Program:
# Normal curves
import matplotlib.pyplot as plt
import numpy as np
mu, sigma = 0.5, 0.1
s = np.random.normal(mu, sigma, 1000)
# Create the bins and histogram (the old normed= argument was removed; use density=)
count, bins, ignored = plt.hist(s, 20, density=True)
Output:
0.953463
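The listing stops after the histogram call. A common way to finish a normal-curve demonstration is to overlay the theoretical Gaussian density on the histogram; the step below is a sketch of that, not part of the original listing, and assumes the mu, sigma, and bins variables defined above.
# overlay the normal probability density function on the histogram (illustrative addition)
plt.plot(bins,
         1/(sigma * np.sqrt(2 * np.pi)) * np.exp(-(bins - mu)**2 / (2 * sigma**2)),
         linewidth=2, color='r')
plt.show()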
Experiment No: 5
REGRESSION
Program:
import numpy as np
import matplotlib.pyplot as plt

def estimate_coef(x, y):
    # number of observations/points
    n = np.size(x)
    # mean of x and y vector
    m_x = np.mean(x)
    m_y = np.mean(y)
    # calculating cross-deviation and deviation about x
    SS_xy = np.sum(y*x) - n*m_y*m_x
    SS_xx = np.sum(x*x) - n*m_x*m_x
    # calculating regression coefficients
    b_1 = SS_xy / SS_xx
    b_0 = m_y - b_1*m_x
    return (b_0, b_1)

def plot_regression_line(x, y, b):
    # plotting the actual points as a scatter plot
    plt.scatter(x, y, color="m", marker="o", s=30)
    # predicted response vector
    y_pred = b[0] + b[1]*x
    # plotting the regression line
    plt.plot(x, y_pred, color="g")
    # putting labels
    plt.xlabel('x')
    plt.ylabel('y')
    # function to show plot
    plt.show()

def main():
    # observations / data
    x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
    y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
    # estimating coefficients
    b = estimate_coef(x, y)
    print("Estimated coefficients:\nb_0 = {} \
    \nb_1 = {}".format(b[0], b[1]))
    # plotting regression line
    plot_regression_line(x, y, b)

if __name__ == "__main__":
    main()
Output:
Estimated coefficients:
b_0 = -0.0586206896552
b_1 = 1.45747126437
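As a quick sanity check (not part of the original program), NumPy's polyfit fits the same least-squares line; its slope and intercept should agree with the b_1 and b_0 returned by estimate_coef for the same data.
# illustrative cross-check using np.polyfit (degree 1 = straight line)
import numpy as np
x = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
y = np.array([1, 3, 2, 5, 7, 8, 8, 9, 10, 12])
slope, intercept = np.polyfit(x, y, 1)
print("polyfit: b_0 = {}, b_1 = {}".format(intercept, slope))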
Experiment No: 6
Z-TEST
Program:
# imports
import math
import numpy as np
from numpy.random import randn
from statsmodels.stats.weightstats import ztest
# Generate a random array of 50 numbers having mean 110 and sd 15,
# similar to the IQ scores data we assume above
mean_iq = 110
sd_iq = 15/math.sqrt(50)
alpha = 0.05
null_mean = 100
data = sd_iq*randn(50) + mean_iq
# print mean and sd
print('mean=%.2f stdv=%.2f' % (np.mean(data), np.std(data)))
# now we perform the test. In this function we pass the data; in the value parameter
# we pass the mean under the null hypothesis, and as the alternative hypothesis
# we check whether the mean is larger
ztest_Score, p_value = ztest(data, value=null_mean, alternative='larger')
# the function outputs a z-score and the corresponding p-value; we compare the
# p-value with alpha: if it is greater than alpha we do not reject the null
# hypothesis, else we reject it.
if(p_value < alpha):
    print("Reject Null Hypothesis")
else:
    print("Fail to Reject Null Hypothesis")
Output:
Reject Null Hypothesis
Experiment No: 7
T-TEST
Program:
# Importing the required libraries and packages
import numpy as np
from scipy import stats
# Defining two random distributions
# Sample size
N = 10
# Gaussian distributed data with mean = 2 and var = 1
x = np.random.randn(N) + 2
# Gaussian distributed data with mean = 0 and var = 1
y = np.random.randn(N)
# Calculating the standard deviation
# Calculating the variance to get the standard deviation
var_x = x.var(ddof=1)
var_y = y.var(ddof=1)
# Pooled standard deviation
SD = np.sqrt((var_x + var_y) / 2)
print("Standard Deviation =", SD)
# Calculating the t-statistic
tval = (x.mean() - y.mean()) / (SD * np.sqrt(2 / N))
# Comparing with the critical t-value
# Degrees of freedom
dof = 2 * N - 2
# p-value from the t-distribution
pval = 1 - stats.t.cdf(tval, df=dof)
print("t = " + str(tval))
print("p = " + str(2 * pval))
Output:
Standard Deviation = 0.7642398582227466
t = 4.87688162540348
p = 0.0001212767169695983
t = 4.876881625403479
p = 0.00012127671696957205
Experiment No: 8
ANOVA
Program:
# Installing the package
install.packages("dplyr")
# Loading the package
library(dplyr)
# Variance in mean within group and between group
boxplot(mtcars$disp~factor(mtcars$gear),
        xlab = "gear", ylab = "disp")
# Step 1: Set up the null hypothesis and the alternate hypothesis
# H0: mu = mu01 = mu02 (there is no difference
# between the average displacement for different gears)
# H1: not all means are equal
# Step 2: Calculate the test statistic using the aov function
mtcars_aov <- aov(mtcars$disp~factor(mtcars$gear))
summary(mtcars_aov)
# Step 3: Find the F-critical value
# For a significance level of 0.05, alpha = 0.05
# Step 4: Compare the test statistic with the F-critical value
# and conclude the test: if p < alpha, reject the null hypothesis
Output:
Experiment No: 9
Program
# Importing the necessary libraries
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import load_boston
sns.set(style="ticks", color_codes=True)
plt.rcParams['figure.figsize'] = (8, 5)
plt.rcParams['figure.dpi'] = 150
# loading the data
boston = load_boston()
You can check those keys with the following code.
print(boston.keys())
The output will be as follows:
dict_keys(['data', 'target', 'feature_names', 'DESCR', 'filename'])
print(boston.DESCR)
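The listing stops after printing the dataset description. A common next step, sketched here as an assumption rather than part of the original program, is to place the data in a DataFrame for exploration (note that load_boston was removed in scikit-learn 1.2, so this runs only on older versions):
# build a DataFrame from the returned Bunch object (illustrative addition)
df = pd.DataFrame(boston.data, columns=boston.feature_names)
df['MEDV'] = boston.target   # MEDV is the conventional name for the target column
print(df.head())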
Experiment No: 10
Program
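The program listing for this experiment did not survive extraction; only its outputs remain. From the summary below (a statsmodels Logit model named log_reg, fitted by MLE on 30 observations with two predictors, followed by test-set predictions and a confusion matrix), the fitting step was most likely along these lines. The file name and column names are assumptions, not taken from the original:
# Hypothetical reconstruction of the missing listing (statsmodels logistic regression)
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv('admission.csv')           # assumed file: binary 'admitted' plus two predictors
X = sm.add_constant(df[['gmat', 'gpa']])    # assumed predictor columns; gives Df Model = 2
y = df['admitted']
log_reg = sm.Logit(y, X).fit()              # MLE fit; prints the optimization message below
print(log_reg.summary())                    # produces the summary table shown below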
Output :
Optimization terminated successfully.
Current function value: 0.352707
Iterations 8
# printing the summary table
print(log_reg.summary())
Output :
Logit Regression Results
=============================================================
Dep. Variable: admitted No. Observations: 30
Model: Logit Df Residuals: 27
Method: MLE Df Model: 2
Date: Wed, 15 Jul 2020 Pseudo R-squ.: 0.4912
Time: 16:09:17 Log-Likelihood: -10.581
Output :
Optimization terminated successfully.
Current function value: 0.352707
Iterations 8
Actual values [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]
Predictions : [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
Output :
Confusion Matrix :
[[6 0]
[2 2]]
Test accuracy = 0.8
Experiment No: 11
Program
We are using the Superstore sales data.
import warnings
import itertools
import numpy as np
import matplotlib.pyplot as plt
warnings.filterwarnings("ignore")
plt.style.use('fivethirtyeight')
import pandas as pd
import statsmodels.api as sm
import matplotlib
matplotlib.rcParams['axes.labelsize'] = 14
matplotlib.rcParams['xtick.labelsize'] = 12
matplotlib.rcParams['ytick.labelsize'] = 12
matplotlib.rcParams['text.color'] = 'k'
We start with time series analysis and forecasting for furniture sales.
df = pd.read_excel("Superstore.xls")
furniture = df.loc[df['Category'] == 'Furniture']
We have a good four years' worth of furniture sales data.
furniture['Order Date'].min(), furniture['Order Date'].max()
Timestamp('2014-01-06 00:00:00'), Timestamp('2017-12-30 00:00:00')
Data Preprocessing
This step includes removing columns we do not need, checking for missing values, aggregating sales by date, and so on.
cols = ['Row ID', 'Order ID', 'Ship Date', 'Ship Mode', 'Customer ID',
        'Customer Name', 'Segment', 'Country', 'City', 'State', 'Postal Code',
        'Region', 'Product ID', 'Category', 'Sub-Category', 'Product Name',
        'Quantity', 'Discount', 'Profit']
furniture.drop(cols, axis=1, inplace=True)
furniture = furniture.sort_values('Order Date')
furniture.isnull().sum()
furniture = furniture.groupby('Order Date')['Sales'].sum().reset_index()
Order Date 0
Sales 0
dtype: int64
Figure 1
Figure 2
We will use the average daily sales value for each month instead, and we will use the start of each month as the timestamp.
furniture = furniture.set_index('Order Date')   # resampling requires 'Order Date' as the index
y = furniture['Sales'].resample('MS').mean()
Have a quick peek at the 2017 furniture sales data.
y['2017':]
Figure 3