C1 W2 Lab03: Feature Scaling and Learning Rate (Soln)
(Multi-variable)
Goals
In this lab you will:
• Utilize the routines for linear regression with multiple features developed in the previous lab
• Run gradient descent on a data set with multiple features
• Explore the impact of the learning rate α on convergence
• Improve the performance of gradient descent by feature scaling using z-score normalization
Tools
You will utilize the functions developed in the last lab as well as matplotlib and NumPy.
import numpy as np
import matplotlib.pyplot as plt
from lab_utils_multi import load_house_data, run_gradient_descent
from lab_utils_multi import norm_plot, plt_equal_scale, plot_cost_i_w
from lab_utils_common import dlc
np.set_printoptions(precision=2)
plt.style.use('./deeplearning.mplstyle')
Notation
| General Notation | Description | Python (if applicable) |
|:-----------------|:------------|:-----------------------|
| $a$ | scalar, non bold | |
| $\mathbf{a}$ | vector, bold | |
| $\mathbf{A}$ | matrix, bold capital | |
| **Regression** | | |
| $\mathbf{X}$ | training example matrix | X_train |
| $\mathbf{y}$ | training example targets | y_train |
| $\mathbf{x}^{(i)}$, $y^{(i)}$ | $i$-th training example | X[i], y[i] |
| $m$ | number of training examples | m |
| $n$ | number of features in each example | n |
| $\mathbf{w}$ | parameter: weight | w |
| $b$ | parameter: bias | b |
| $f_{\mathbf{w},b}(\mathbf{x}^{(i)})$ | The result of the model evaluation at $\mathbf{x}^{(i)}$ parameterized by $\mathbf{w},b$: $f_{\mathbf{w},b}(\mathbf{x}^{(i)}) = \mathbf{w} \cdot \mathbf{x}^{(i)} + b$ | f_wb |
| $\frac{\partial J(\mathbf{w},b)}{\partial w_j}$ | the gradient or partial derivative of cost with respect to a parameter $w_j$ | dj_dw[j] |
| $\frac{\partial J(\mathbf{w},b)}{\partial b}$ | the gradient or partial derivative of cost with respect to a parameter $b$ | dj_db |
Problem Statement
As in the previous labs, you will use the motivating example of housing price prediction. The
training data set contains many examples with 4 features (size, bedrooms, floors and age)
shown in the table below. Note, in this lab, the Size feature is in sqft while earlier labs utilized
1000 sqft. This data set is larger than the previous lab.
We would like to build a linear regression model using these values so we can then predict the
price for other houses - say, a house with 1200 sqft, 3 bedrooms, 1 floor, 40 years old.
Dataset:
| Size (sqft) | Number of Bedrooms | Number of Floors | Age of Home | Price (1000s dollars) |
|-------------|--------------------|------------------|-------------|-----------------------|
| 952 | 2 | 1 | 65 | 271.5 |
| 1244 | 3 | 2 | 64 | 232 |
| 1947 | 3 | 2 | 17 | 509.8 |
| ... | ... | ... | ... | ... |
# load the dataset
X_train, y_train = load_house_data()
X_features = ['size(sqft)','bedrooms','floors','age']
Let's view the dataset and its features by plotting each feature versus price.
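A minimal sketch of one way to produce these scatter plots with matplotlib, using only the X_train, y_train, and X_features variables loaded above:

# plot each of the 4 features against the target, price
fig, ax = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
for i in range(len(ax)):
    ax[i].scatter(X_train[:, i], y_train)
    ax[i].set_xlabel(X_features[i])
ax[0].set_ylabel("Price (1000's)")
plt.show()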
Plotting each feature vs. the target, price, provides some indication of which features have the
strongest influence on price. Above, increasing size also increases price. Bedrooms and floors
don't seem to have a strong impact on price. Newer houses have higher prices than older
houses.
Gradient Descent With Multiple Variables
Here are the equations you developed in the last lab on gradient descent for multiple variables:

$$\text{repeat until convergence:}$$
$$\quad w_j = w_j - \alpha \frac{\partial J(\mathbf{w},b)}{\partial w_j} \quad \text{for } j = 0 \ldots n-1 \tag{1}$$
$$\quad b = b - \alpha \frac{\partial J(\mathbf{w},b)}{\partial b}$$

where $n$ is the number of features, parameters $w_j$, $b$ are updated simultaneously and where

$$\frac{\partial J(\mathbf{w},b)}{\partial w_j} = \frac{1}{m} \sum_{i=0}^{m-1} \left( f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)} \right) x^{(i)}_j \tag{2}$$

$$\frac{\partial J(\mathbf{w},b)}{\partial b} = \frac{1}{m} \sum_{i=0}^{m-1} \left( f_{\mathbf{w},b}(\mathbf{x}^{(i)}) - y^{(i)} \right) \tag{3}$$
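For reference, here is a minimal NumPy sketch of these gradient computations. It is an illustrative re-implementation, not necessarily the exact code inside lab_utils_multi, and the function name is only for illustration:

def compute_gradients(X, y, w, b):
    # X: (m,n) features, y: (m,) targets, w: (n,) weights, b: scalar bias
    m = X.shape[0]
    err = X @ w + b - y            # prediction error f_wb(x^(i)) - y^(i), shape (m,)
    dj_dw = (X.T @ err) / m        # equation (2): gradient w.r.t. each w_j, shape (n,)
    dj_db = np.sum(err) / m        # equation (3): gradient w.r.t. b, scalar
    return dj_dw, dj_db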
Learning Rate
The lectures discussed some of the issues related to setting the learning rate α . The learning
rate controls the size of the update to the parameters. See equation (1) above. It is shared by all
the parameters.
Let's run gradient descent and try a few settings of α on our data set
α = 9.9e-7
#set alpha to 9.9e-7
_, _, hist = run_gradient_descent(X_train, y_train, 10, alpha = 9.9e-7)
Iteration Cost          w0       w1       w2       w3       b       djdw0    djdw1    djdw2    djdw3    djdb
---------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|
        0 9.55884e+04  5.5e-01  1.0e-03  5.1e-04  1.2e-02  3.6e-04 -5.5e+05 -1.0e+03 -5.2e+02 -1.2e+04 -3.6e+02
        1 1.28213e+05 -8.8e-02 -1.7e-04 -1.0e-04 -3.4e-03 -4.8e-05  6.4e+05  1.2e+03  6.2e+02  1.6e+04  4.1e+02
        2 1.72159e+05  6.5e-01  1.2e-03  5.9e-04  1.3e-02  4.3e-04 -7.4e+05 -1.4e+03 -7.0e+02 -1.7e+04 -4.9e+02
        3 2.31358e+05 -2.1e-01 -4.0e-04 -2.3e-04 -7.5e-03 -1.2e-04  8.6e+05  1.6e+03  8.3e+02  2.1e+04  5.6e+02
        4 3.11100e+05  7.9e-01  1.4e-03  7.1e-04  1.5e-02  5.3e-04 -1.0e+06 -1.8e+03 -9.5e+02 -2.3e+04 -6.6e+02
        5 4.18517e+05 -3.7e-01 -7.1e-04 -4.0e-04 -1.3e-02 -2.1e-04  1.2e+06  2.1e+03  1.1e+03  2.8e+04  7.5e+02
        6 5.63212e+05  9.7e-01  1.7e-03  8.7e-04  1.8e-02  6.6e-04 -1.3e+06 -2.5e+03 -1.3e+03 -3.1e+04 -8.8e+02
        7 7.58122e+05 -5.8e-01 -1.1e-03 -6.2e-04 -1.9e-02 -3.4e-04  1.6e+06  2.9e+03  1.5e+03  3.8e+04  1.0e+03
        8 1.02068e+06  1.2e+00  2.2e-03  1.1e-03  2.3e-02  8.3e-04 -1.8e+06 -3.3e+03 -1.7e+03 -4.2e+04 -1.2e+03
        9 1.37435e+06 -8.7e-01 -1.7e-03 -9.1e-04 -2.7e-02 -5.2e-04  2.1e+06  3.9e+03  2.0e+03  5.1e+04  1.4e+03
w,b found by gradient descent: w: [-0.87 -0.   -0.   -0.03], b: -0.00
It appears the learning rate is too high. The solution does not converge. Cost is increasing rather
than decreasing. Let's plot the result:
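The plot can be produced with the plot_cost_i_w helper from lab_utils_multi (the same call appears later in this lab):

plot_cost_i_w(X_train, y_train, hist)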
The plot on the right shows the value of one of the parameters, $w_0$. At each iteration, it is overshooting the optimal value and as a result, cost ends up increasing rather than approaching the minimum. Note that this is not a completely accurate picture as there are 4 parameters being modified each pass rather than just one. This plot is only showing $w_0$ with the other parameters fixed at benign values. In this and later plots you may notice the blue and orange lines being slightly off.
α = 9e-7
Let's try a bit smaller value and see what happens.
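A plausible call, mirroring the previous cell:

#set alpha to 9e-7
_, _, hist = run_gradient_descent(X_train, y_train, 10, alpha = 9e-7)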
Cost is decreasing throughout the run showing that alpha is not too large.
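The plots described below can then be generated with:

plot_cost_i_w(X_train, y_train, hist)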
On the left, you see that cost is decreasing as it should. On the right, you can see that $w_0$ is still oscillating around the minimum, but it is decreasing each iteration rather than increasing. Note above that dj_dw[0] changes sign with each iteration as w[0] jumps over the optimal value.
This alpha value will converge. You can vary the number of iterations to see how it behaves.
α = 1e-7
Let's try a bit smaller value for α and see what happens.
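A plausible call to produce hist for this run, mirroring the cells above:

#set alpha to 1e-7
_, _, hist = run_gradient_descent(X_train, y_train, 10, alpha = 1e-7)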
plot_cost_i_w(X_train,y_train,hist)
On the left, you see that cost is decreasing as it should. On the right, you can see that $w_0$ is decreasing without crossing the minimum. Note above that dj_dw[0] is negative throughout the run. This solution will also converge, though not quite as quickly as the previous example.
Feature Scaling
The lectures described the importance of rescaling the dataset so the features have a similar
range. If you are interested in the details of why this is the case, click on the 'details' header
below. If not, the section below will walk through an implementation of how to do feature
scaling.
Let's look again at the situation with α = 9e-7. This is pretty close to the maximum value we can
set α to without diverging. This is a short run showing the first few iterations:
Above, while cost is being decreased, it's clear that $w_0$ is making more rapid progress than the other parameters due to its much larger gradient.
The graphic below shows the result of a very long run with α = 9e-7. This takes several hours.
Above, you can see cost decreased slowly after its initial reduction. Notice the difference
between w0 and w1,w2,w3 as well as dj_dw0 and dj_dw1-3. w0 reaches its near final value very
quickly and dj_dw0 has quickly decreased to a small value showing that w0 is near the final
value. The other parameters were reduced much more slowly.
The lectures discussed three different scaling techniques:
• Feature scaling: essentially dividing each positive feature by its maximum value, or more generally, rescaling each feature by both its minimum and maximum values using (x-min)/(max-min). Both methods normalize features into a small range (at most -1 to 1). The former method is simple and serves well for the positive features in the lecture's example, while the latter works for any features.
• Mean normalization: $x_i := \dfrac{x_i - \mu_i}{\max - \min}$
• Z-score normalization, which we will explore below.
z-score normalization
After z-score normalization, all features will have a mean of 0 and a standard deviation of 1.
To implement z-score normalization, adjust your input values as shown in this formula:
$$x^{(i)}_j = \dfrac{x^{(i)}_j - \mu_j}{\sigma_j}$$

where $j$ selects a feature or a column in the $\mathbf{X}$ matrix, $\mu_j$ is the mean of all the values for feature (j), and $\sigma_j$ is the standard deviation of feature (j).
$$\mu_j = \frac{1}{m} \sum_{i=0}^{m-1} x^{(i)}_j$$

$$\sigma^2_j = \frac{1}{m} \sum_{i=0}^{m-1} \left( x^{(i)}_j - \mu_j \right)^2$$
Implementation
def zscore_normalize_features(X):
    """
    computes X, z-score normalized by column

    Args:
      X (ndarray (m,n))     : input data, m examples, n features
    Returns:
      X_norm (ndarray (m,n)): input normalized by column
      mu (ndarray (n,))     : mean of each feature
      sigma (ndarray (n,))  : standard deviation of each feature
    """
    # find the mean of each column/feature
    mu     = np.mean(X, axis=0)     # mu will have shape (n,)
    # find the standard deviation of each column/feature
    sigma  = np.std(X, axis=0)      # sigma will have shape (n,)
    # element-wise, subtract mu for that column from each example, divide by std for that column
    X_norm = (X - mu) / sigma

    return X_norm, mu, sigma
Let's look at the steps involved in Z-score normalization. The plot below shows the
transformation step by step.
mu     = np.mean(X_train, axis=0)
sigma  = np.std(X_train, axis=0)
X_mean = (X_train - mu)
X_norm = (X_train - mu) / sigma

# three panels: raw data, mean-removed data, z-score normalized data
fig, ax = plt.subplots(1, 3, figsize=(12, 3))
ax[0].scatter(X_train[:,0], X_train[:,3])
ax[0].set_xlabel(X_features[0]); ax[0].set_ylabel(X_features[3])
ax[0].set_title("unnormalized")
ax[0].axis('equal')

ax[1].scatter(X_mean[:,0], X_mean[:,3])
ax[1].set_xlabel(X_features[0]); ax[1].set_ylabel(X_features[3])
ax[1].set_title(r"X - $\mu$")
ax[1].axis('equal')

ax[2].scatter(X_norm[:,0], X_norm[:,3])
ax[2].set_xlabel(X_features[0]); ax[2].set_ylabel(X_features[3])
ax[2].set_title(r"Z-score normalized")
ax[2].axis('equal')

plt.tight_layout(rect=[0, 0.03, 1, 0.95])
fig.suptitle("distribution of features before, during, after normalization")
plt.show()
The plot above shows the relationship between two of the training set parameters, "age" and
"size(sqft)". These are plotted with equal scale.
• Left: Unnormalized: The range of values or the variance of the 'size(sqft)' feature is much
larger than that of age
• Middle: The first step removes the mean or average value from each feature. This leaves
features that are centered around zero. It's difficult to see the difference for the 'age'
feature, but 'size(sqft)' is clearly around zero.
• Right: The second step divides by the standard deviation. This leaves both features
centered at zero with a similar scale.
The peak to peak range of each column is reduced from a factor of thousands to a factor of 2-3
by normalization.
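A sketch of this step, using the zscore_normalize_features function defined above together with the norm_plot helper imported from lab_utils_multi (its per-axis signature is assumed here):

# normalize the original features and compare peak-to-peak ranges
X_norm, X_mu, X_sigma = zscore_normalize_features(X_train)
print(f"X_mu = {X_mu}, \nX_sigma = {X_sigma}")
print(f"Peak to Peak range by column in Raw        X: {np.ptp(X_train, axis=0)}")
print(f"Peak to Peak range by column in Normalized X: {np.ptp(X_norm, axis=0)}")

# plot the distribution of each normalized feature
fig, ax = plt.subplots(1, 4, figsize=(12, 3))
for i in range(len(ax)):
    norm_plot(ax[i], X_norm[:, i])   # norm_plot(axis, data) is an assumed signature
    ax[i].set_xlabel(X_features[i])
ax[0].set_ylabel("count")
fig.suptitle("distribution of features after normalization")
plt.show()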
Notice, above, the range of the normalized data (x-axis) is centered around zero and roughly +/-
2. Most importantly, the range is similar for each feature.
Let's re-run our gradient descent algorithm with normalized data. Note the vastly larger value
of alpha. This will speed up gradient descent.
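A plausible cell for this step is shown below; the learning rate of 0.1 matches the note in the next paragraph, while the iteration count of 1000 is an assumption:

# gradient descent on the normalized features with a much larger alpha
w_norm, b_norm, hist = run_gradient_descent(X_norm, y_train, 1000, alpha = 1.0e-1)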
The scaled features get very accurate results much, much faster! Notice the gradient of each parameter is tiny by the end of this fairly short run. A learning rate of 0.1 is a good start for regression with normalized features. Let's plot our predictions versus the target values (see the sketch after the notes below). Note, the prediction is made using the normalized features while the plot is shown using the original feature values.
• with multiple features, we can no longer have a single plot showing results versus
features.
• when generating the plot, the normalized features were used. Any predictions using the
parameters learned from a normalized training set must also be normalized.
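A sketch of this plot, assuming w_norm and b_norm from the normalized run above:

# predict the target for every training example using the normalized features
yp = np.dot(X_norm, w_norm) + b_norm

# plot predictions and targets against the original, unnormalized feature values
fig, ax = plt.subplots(1, 4, figsize=(12, 3), sharey=True)
for i in range(len(ax)):
    ax[i].scatter(X_train[:, i], y_train, label='target')
    ax[i].scatter(X_train[:, i], yp, label='predict')
    ax[i].set_xlabel(X_features[i])
ax[0].set_ylabel("Price"); ax[0].legend()
fig.suptitle("target versus prediction using z-score normalized model")
plt.show()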
Prediction
The point of generating our model is to use it to predict housing prices that are not in the data set. Let's predict the price of a house with 1200 sqft, 3 bedrooms, 1 floor, 40 years old. Recall that you must normalize the data with the mean and standard deviation derived when the training data was normalized.
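A sketch of this prediction, assuming X_mu, X_sigma, w_norm, and b_norm from the cells above:

# normalize the new example with the training set's mean and standard deviation
x_house = np.array([1200, 3, 1, 40])
x_house_norm = (x_house - X_mu) / X_sigma
print(x_house_norm)
# predict using the parameters learned on the normalized training data
x_house_predict = np.dot(x_house_norm, w_norm) + b_norm
print(f"predicted price of a house with 1200 sqft, 3 bedrooms, 1 floor, 40 years old = ${x_house_predict*1000:0.0f}")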
Cost Contours
Another way to view feature scaling is in terms of the cost contours. When feature scales do not
match, the plot of cost versus parameters in a contour plot is asymmetric.
In the plot below, the scale of the parameters is matched. The left plot is the cost contour plot of
w[0], the square feet versus w[1], the number of bedrooms before normalizing the features. The
plot is so asymmetric, the curves completing the contours are not visible. In contrast, when the
features are normalized, the cost contour is much more symmetric. The result is that updates to
parameters during gradient descent can make equal progress for each parameter.
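The contour figure is produced with the plt_equal_scale helper imported at the top of the lab; the argument order shown here is an assumption:

# cost contours of w[0] vs w[1] for raw and normalized features, on matched scales
plt_equal_scale(X_train, X_norm, y_train)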
Congratulations!
In this lab you:
• utilized the routines for linear regression with multiple features you developed in
previous labs
• explored the impact of the learning rate α on convergence
• discovered the value of feature scaling using z-score normalization in speeding
convergence
Acknowledgments
The housing data was derived from the Ames Housing dataset compiled by Dean De Cock for
use in data science education.