Week 7
Test Questions
Answer: B
Explanation: The mean of the data points $x_1 = (0, 1, 2)^T$, $x_2 = (1, 1, 1)^T$, $x_3 = (2, 1, 0)^T$ is
$$\bar{x} = \frac{1}{3}\sum_{i=1}^{3} x_i = \frac{1}{3}\left[\begin{pmatrix}0\\1\\2\end{pmatrix} + \begin{pmatrix}1\\1\\1\end{pmatrix} + \begin{pmatrix}2\\1\\0\end{pmatrix}\right] = \begin{pmatrix}1\\1\\1\end{pmatrix}$$
∴ Option B is correct.
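As a quick numerical check, here is a minimal NumPy sketch (the array name X is our own) that reproduces this mean:

import numpy as np

# Data points x1, x2, x3 from the question, stored as rows
X = np.array([[0., 1., 2.],
              [1., 1., 1.],
              [2., 1., 0.]])

x_bar = X.mean(axis=0)  # average over the three points
print(x_bar)            # [1. 1. 1.]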
2. (2 points) The covariance matrix $C = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T$ of the data points $x_1, x_2, x_3$ is
A. $\begin{pmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{pmatrix}$
B. $\begin{pmatrix}0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\\ 0.5 & 0.5 & 0.5\end{pmatrix}$
C. $\begin{pmatrix}0.67 & 0 & -0.67\\ 0 & 0 & 0\\ -0.67 & 0 & 0.67\end{pmatrix}$
D. $\begin{pmatrix}1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$
Answer: C
Explanation: First, center the data by subtracting the mean from each point:
$$x_1 - \bar{x} = \begin{pmatrix}-1\\0\\1\end{pmatrix},\quad x_2 - \bar{x} = \begin{pmatrix}0\\0\\0\end{pmatrix},\quad x_3 - \bar{x} = \begin{pmatrix}1\\0\\-1\end{pmatrix}$$
Now apply the formula $C = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T$ to get the covariance matrix $C$:
$$C = \frac{1}{3}\left[\begin{pmatrix}1 & 0 & -1\\ 0 & 0 & 0\\ -1 & 0 & 1\end{pmatrix} + \begin{pmatrix}0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0\end{pmatrix} + \begin{pmatrix}1 & 0 & -1\\ 0 & 0 & 0\\ -1 & 0 & 1\end{pmatrix}\right] = \frac{1}{3}\begin{pmatrix}2 & 0 & -2\\ 0 & 0 & 0\\ -2 & 0 & 2\end{pmatrix} \approx \begin{pmatrix}0.67 & 0 & -0.67\\ 0 & 0 & 0\\ -0.67 & 0 & 0.67\end{pmatrix}$$
∴ Option C is correct.
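A sketch verifying this numerically (the array name X and this check are our own); note the $1/n$ convention in the question, hence ddof=0 rather than NumPy's sample-covariance default:

import numpy as np

# Data points x1, x2, x3 as rows of X
X = np.array([[0., 1., 2.],
              [1., 1., 1.],
              [2., 1., 0.]])

Xc = X - X.mean(axis=0)      # center the data
C = (Xc.T @ Xc) / len(X)     # (1/n) * sum of outer products
# Equivalent one-liner: np.cov(X.T, ddof=0)
print(np.round(C, 2))        # [[ 0.67  0.   -0.67] [ 0.  0.  0. ] [-0.67  0.    0.67]]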
3. (2 points) The eigenvalues of the covariance matrix $C = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T$ are
A. 2, 0, 0
B. 1, 1, 1
C. 1.34, 0, 0
D. 0.5, 0, 0.5
Answer: C
Explanation: To find the eigenvalues, form the characteristic polynomial and find its roots:
$$|C - \lambda I| = \begin{vmatrix}0.67 - \lambda & 0 & -0.67\\ 0 & -\lambda & 0\\ -0.67 & 0 & 0.67 - \lambda\end{vmatrix}$$
$$= (0.67 - \lambda)(-\lambda)(0.67 - \lambda) + (-0.67)(-0.67\lambda) = -\lambda^3 + 1.34\lambda^2 - 0.45\lambda + 0.45\lambda = \lambda^2(1.34 - \lambda)$$
Solving for the roots, we get
$$|C - \lambda I| = 0 \implies \lambda = 1.34, 0, 0$$
∴ Option C is correct.
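A sketch confirming the roots numerically (np.linalg.eigvalsh applies since $C$ is symmetric; with $C$ built from exact fractions the top eigenvalue is $4/3 \approx 1.33$, and the key's 1.34 reflects rounding $C$'s entries to 0.67):

import numpy as np

X = np.array([[0., 1., 2.], [1., 1., 1.], [2., 1., 0.]])
Xc = X - X.mean(axis=0)
C = (Xc.T @ Xc) / len(X)

vals = np.linalg.eigvalsh(C)       # returned in ascending order
print(np.round(vals[::-1], 4))     # [1.3333 0.     0.    ]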
4. (2 points) The eigenvectors of the covariance matrix $C = \frac{1}{n}\sum_{i=1}^{n}(x_i - \bar{x})(x_i - \bar{x})^T$ are
(Note: The eigenvectors should be arranged in descending order of eigenvalues from left to right in the matrix.)
A. $\begin{pmatrix}1 & 0 & 1\\ 0 & 1 & 0\\ 1 & 0 & 1\end{pmatrix}$
B. $\begin{pmatrix}0.71 & 0 & 1\\ 0 & 0.71 & 0\\ 0.71 & 0.71 & 0\end{pmatrix}$
C. $\begin{pmatrix}-0.71 & 0 & 0.71\\ 0 & 1 & 0\\ 0.71 & 0 & 0.71\end{pmatrix}$
D. $\begin{pmatrix}0.33 & 0 & 0\\ 0.33 & 1 & 0\\ 0.34 & 0 & 1\end{pmatrix}$
Answer: C
Explanation: Consider the eigenvalues one by one and find the corresponding eigenvectors.
For $\lambda = 1.34$, solving $(C - 1.34I)v = 0$ gives $v \propto (-1, 0, 1)^T$, which normalizes to $(-0.71, 0, 0.71)^T$.
Now, for $\lambda = 0$:
$$E_0 = \operatorname{null}(C) = \operatorname{null}\begin{pmatrix}0.67 & 0 & -0.67\\ 0 & 0 & 0\\ -0.67 & 0 & 0.67\end{pmatrix} = \operatorname{null}\begin{pmatrix}1 & 0 & -1\\ 0 & 0 & 0\\ 0 & 0 & 0\end{pmatrix} = \operatorname{col}\begin{pmatrix}0 & 1\\ 1 & 0\\ 0 & 1\end{pmatrix}$$
Since eigenvectors are determined only up to scaling, normalizing these vectors shows that the columns of option C match our eigenvectors.
∴ Option C is correct.
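np.linalg.eigh gives an orthonormal eigenbasis and is a quick cross-check (a sketch; eigh orders eigenvalues ascending, so the columns are reversed here, and any column may differ from option C by sign, or by rotation within the repeated $\lambda = 0$ eigenspace):

import numpy as np

X = np.array([[0., 1., 2.], [1., 1., 1.], [2., 1., 0.]])
Xc = X - X.mean(axis=0)
C = (Xc.T @ Xc) / len(X)

vals, vecs = np.linalg.eigh(C)            # ascending eigenvalues, orthonormal columns
vals, vecs = vals[::-1], vecs[:, ::-1]    # reorder to descending
print(np.round(vals, 2))   # [1.33 0.   0.  ]
print(np.round(vecs, 2))   # first column is proportional to (-1, 0, 1)/sqrt(2)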
5. (2 points) The data points $x_1, x_2, x_3$ are projected onto a one-dimensional space using PCA as points $z_1, z_2, z_3$ respectively. (Use the eigenvector with the maximum eigenvalue for this projection.)
A. $z_1 = \begin{pmatrix}1\\1\\1\end{pmatrix}, z_2 = \begin{pmatrix}1\\1\\1\end{pmatrix}, z_3 = \begin{pmatrix}1\\1\\1\end{pmatrix}$
B. $z_1 = \begin{pmatrix}0.5\\0.5\\0.5\end{pmatrix}, z_2 = \begin{pmatrix}0\\0\\0\end{pmatrix}, z_3 = \begin{pmatrix}-0.5\\-0.5\\-0.5\end{pmatrix}$
C. $z_1 = \begin{pmatrix}0\\2\\2\end{pmatrix}, z_2 = \begin{pmatrix}1\\1\\1\end{pmatrix}, z_3 = \begin{pmatrix}2\\2\\0\end{pmatrix}$
D. $z_1 = \begin{pmatrix}0\\1\\2.0164\end{pmatrix}, z_2 = \begin{pmatrix}1.0082\\1\\1.0082\end{pmatrix}, z_3 = \begin{pmatrix}2.0164\\1\\0\end{pmatrix}$
Answer: D
Explanation: The centered points $x_i - \bar{x}$ all lie along the top eigenvector $w \propto (-1, 0, 1)^T$, so projecting onto $w$ and reconstructing via $z_i = \bar{x} + \left(w^T(x_i - \bar{x})\right)w$ recovers the data points almost exactly; the small deviations in option D (entries such as 1.0082 and 2.0164) come from rounding the eigenvector entries to 0.71.
∴ Option D is correct.
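A sketch of the projection and reconstruction under this convention (variable names are our own; with the exact eigenvector the centered points lie entirely along $w$, so the reconstruction is exact up to floating point, and option D differs from it only through the rounding noted above):

import numpy as np

X = np.array([[0., 1., 2.], [1., 1., 1.], [2., 1., 0.]])
x_bar = X.mean(axis=0)
Xc = X - x_bar

w = np.array([-1., 0., 1.]) / np.sqrt(2)  # exact top eigenvector
coeffs = Xc @ w                  # 1-D coordinates along w
Z = x_bar + np.outer(coeffs, w)  # reconstructions z1, z2, z3 as rows
print(np.round(Z, 4))            # rows equal x1, x2, x3: reconstruction is exact here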
6. (1 point) The approximation error $J$ on the given data set is given by $J = \frac{1}{n}\sum_{i=1}^{n} \|x_i - z_i\|^2$. What is the reconstruction error?
A. $6.724 \times 10^{-4}$
B. 5
C. 10
D. 20
Answer: A
Explanation: Substitute the values of $x_i$ and the reconstructions $z_i$ from Question 5 into the formula. The differences are
$$x_1 - z_1 = \begin{pmatrix}0\\0\\-0.0164\end{pmatrix},\quad x_2 - z_2 = \begin{pmatrix}-0.0082\\0\\-0.0082\end{pmatrix},\quad x_3 - z_3 = \begin{pmatrix}-0.0164\\0\\0\end{pmatrix}$$
so
$$\sum_{i=1}^{3} \|x_i - z_i\|^2 = (0.0164)^2 + 2(0.0082)^2 + (0.0164)^2 = 6.724 \times 10^{-4},$$
the value given in option A. The error is nonzero only because the eigenvector entries were rounded to 0.71; with the exact eigenvector $(-1, 0, 1)^T/\sqrt{2}$ the reconstruction is perfect and the error is zero.
∴ Option A is correct.
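A sketch computing this error directly from option D's reconstructions (entered verbatim from the options):

import numpy as np

X = np.array([[0., 1., 2.], [1., 1., 1.], [2., 1., 0.]])
Z = np.array([[0., 1., 2.0164],
              [1.0082, 1., 1.0082],
              [2.0164, 1., 0.]])

sq_errors = np.sum((X - Z) ** 2, axis=1)  # ||xi - zi||^2 for each point
print(sq_errors.sum())                    # approx 6.724e-04, option A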