Midterm Lab Quiz 2 - Attempt Review

Started on Tuesday, 18 June 2024, 3:09 PM

State Finished
Completed on Tuesday, 18 June 2024, 3:21 PM
Time taken 11 mins 26 secs
Marks 15.00/20.00
Grade 75.00 out of 100.00

Question 1
Correct

Mark 1.00 out of 1.00

What is the disadvantage of the Naive Bayes classifier?

a. It is unable to handle large amounts of data
b. It is inflexible
c. It is less accurate
d. It is slower to train and predict

Question 2
Incorrect

Mark 0.00 out of 1.00

What is the main advantage of the Hebb rule?

a. It is able to handle nonlinear relationships
b. It is able to handle large datasets
c. It is fast to converge
d. It is easy to implement

Question 3

Correct

Mark 1.00 out of 1.00

What is supervised learning used for?

a. Regression tasks
b. Both classification and regression tasks
c. Classification tasks
d. Unsupervised learning tasks
Question 4

Correct

Mark 1.00 out of 1.00

What is the EM algorithm used to estimate in the "E" step?

a. The model parameters
b. The likelihood of the model
c. The prediction accuracy of the model
d. The latent variables

Question 5

Correct

Mark 1.00 out of 1.00

What is the "M" step in the EM algorithm?

a. The step where the prediction accuracy of the model is calculated
b. The step where the likelihood of the model is maximized
c. The step where the expectation of the latent variables is calculated
d. The step where the model parameters are updated
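
Questions 4 and 5 describe the two halves of the EM algorithm: the E step computes the expectation of the latent variables, and the M step re-estimates the parameters to maximize the likelihood. Below is a minimal illustrative sketch (not part of the quiz) of one EM iteration for a two-component one-dimensional Gaussian mixture; the NumPy usage, variable names, and data values are assumptions made for the example.

import numpy as np

def em_step(x, means, stds, weights):
    # E step: expected (posterior) responsibilities of the latent
    # component assignments for each data point.
    dens = np.stack([
        w * np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))
        for m, s, w in zip(means, stds, weights)
    ])                                  # shape: (n_components, n_points)
    resp = dens / dens.sum(axis=0)      # each column sums to 1

    # M step: update the parameters to maximize the expected
    # complete-data log-likelihood under those responsibilities.
    nk = resp.sum(axis=1)
    new_means = (resp * x).sum(axis=1) / nk
    new_stds = np.sqrt((resp * (x - new_means[:, None]) ** 2).sum(axis=1) / nk)
    new_weights = nk / x.size
    return new_means, new_stds, new_weights

x = np.array([-2.1, -1.9, -2.0, 3.0, 3.2, 2.8])      # made-up data
params = (np.array([-1.0, 1.0]), np.array([1.0, 1.0]), np.array([0.5, 0.5]))
for _ in range(20):
    params = em_step(x, *params)
print(params[0])   # the two means settle near -2 and 3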

Question 6

Correct

Mark 1.00 out of 1.00

What is the Kullback-Leibler (KL) distance used for?

a. To measure the uncertainty of a probability distribution
b. To measure the similarity between two probability distributions
c. To measure the predictability of a probability distribution
d. To measure the dissimilarity between two probability distributions
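
For context on Question 6: the KL divergence D(P || Q) quantifies how much one probability distribution diverges from another, and it is asymmetric, so it behaves as a dissimilarity measure rather than a true distance. A minimal sketch, assuming NumPy and two small discrete distributions made up for the example:

import numpy as np

def kl_divergence(p, q):
    # D(P || Q) = sum_i p_i * log(p_i / q_i); assumes strictly positive
    # probabilities that each sum to 1.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))   # ~0.511 nats
print(kl_divergence([0.9, 0.1], [0.5, 0.5]))   # ~0.368 nats; note the asymmetry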

Question 7
Incorrect

Mark 0.00 out of 1.00

What is the advantage of the Naive Bayes classifier over other classifiers?

a. It is more flexible
b. It is faster to train and predict
c. It is able to handle large amounts of data
d. It is more accurate
Question 8

Correct

Mark 1.00 out of 1.00

What is the Naive Bayes classifier used for?

a. To predict the value of a continuous variable
b. To classify data into different categories based on certain features
c. All of the above
d. To predict the probability of an event occurring

Question 9
Correct

Mark 1.00 out of 1.00

What is the least squares method used for?

a. To solve systems of linear equations
b. To find the line of best fit for a set of data
c. To calculate the mean of a data set
d. To calculate the variance of a data set
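
For Question 9, the least squares method picks the line that minimizes the sum of squared vertical errors to the data. A small sketch using NumPy's lstsq solver; the data values are made up for the example:

import numpy as np

# Made-up data points for the example.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 2.9, 5.2, 6.8])

A = np.column_stack([x, np.ones_like(x)])          # design matrix [x, 1]
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)
print(slope, intercept)                            # line of best fit: y = slope*x + intercept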

Question 10
Correct

Mark 1.00 out of 1.00

What is the EM algorithm used for?

a. All of the above
b. Classification
c. Regression
d. Clustering

Question 11

Correct

Mark 1.00 out of 1.00

What is the advantage of using the Gaussian Naive Bayes classifier over other types of Naive Bayes classifiers?

a. It is able to handle continuous features
b. It is able to handle categorical features
c. It is more accurate
d. It is faster to train and predict
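
Question 11 refers to the Gaussian variant of Naive Bayes, which models each continuous feature with a per-class normal distribution. A minimal sketch, assuming scikit-learn is installed and using its bundled iris data purely for illustration:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)                  # four continuous features per sample
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = GaussianNB().fit(X_train, y_train)           # one Gaussian per feature per class
print(clf.score(X_test, y_test))                   # classification accuracy on held-out data
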
Question 12

Correct

Mark 1.00 out of 1.00

What is the main advantage of using a directed acyclic graph (DAG) over other types of graphs?

a. DAGs are easier to understand and visualize
b. DAGs can represent more complex relationships between data
c. All of the above
d. DAGs are more efficient for storing and processing data
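
For Question 12, one practical property of a DAG is that its nodes admit a topological ordering, which is what makes DAGs convenient for representing dependencies. A small sketch using Python's standard-library graphlib; the node names are made up for the example:

from graphlib import TopologicalSorter   # standard library in Python 3.9+

# Each node maps to the set of nodes it depends on; no cycles are allowed.
dag = {"load": set(), "clean": {"load"}, "train": {"clean"}, "evaluate": {"train"}}
print(list(TopologicalSorter(dag).static_order()))
# -> ['load', 'clean', 'train', 'evaluate']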

Question 13

Correct

Mark 1.00 out of 1.00

What is the EM algorithm used to optimize in the "M" step?

a. The prediction accuracy of the model
b. The model parameters
c. The latent variables
d. The likelihood of the model

Question 14
Incorrect

Mark 0.00 out of 1.00

What is the equation for the Hebb rule?

a. w(new) = w(old) + η(input - output)x(target)
b. w(new) = w(old) + η(output)x(input)
c. w(new) = w(old) + η(output - target)x(input)
d. w(new) = w(old) + η(target - output)x(input)
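
Question 14 contrasts the plain Hebbian update, Δw = η · output · input, with error-driven updates of the perceptron/delta-rule form. A minimal sketch of the Hebbian version; the NumPy usage and example values are assumptions made for illustration:

import numpy as np

def hebb_update(w, x, eta=0.1):
    y = np.dot(w, x)          # neuron output for input x (linear unit)
    return w + eta * y * x    # strengthen weights where input and output co-activate

w = np.array([0.2, -0.1, 0.4])                 # made-up starting weights
w = hebb_update(w, np.array([1.0, 0.0, 1.0]))
print(w)                                       # [0.26 -0.1 0.46]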

Question 15
Incorrect

Mark 0.00 out of 1.00

What is the main goal of the EM algorithm?

a. To maximize the likelihood of a model given the data
b. To maximize the prediction accuracy of the model
c. To minimize the cost or loss function of a model
d. To minimize the error between the predicted and actual values of the data
Question 16

Correct

Mark 1.00 out of 1.00

What is the "E" step in the EM algorithm?

a. The step where the prediction accuracy of the model is calculated
b. The step where the likelihood of the model is maximized
c. The step where the expectation of the latent variables is calculated
d. The step where the model parameters are updated

Question 17

Incorrect

Mark 0.00 out of 1.00

What is the Hebb rule?

a. A rule used to calculate the output of a neural network
b. A rule used to determine the input to a neural network
c. A rule used to adjust the weights in a neural network
d. A rule used to determine the structure of a neural network

Question 18

Correct

Mark 1.00 out of 1.00

What is the main disadvantage of the Hebb rule?

a. It is prone to overfitting
b. It is unable to handle large datasets
c. It is slow to converge
d. It is unable to handle nonlinear relationships

Question 19

Correct

Mark 1.00 out of 1.00

What is the learning rule for a perceptron called?

a. The Hebbian Rule
b. The Perceptron Learning Algorithm
c. The Delta Rule
d. The Backpropagation Algorithm
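
For Question 19, the perceptron's weight update moves the weights by η · (target − output) · input whenever the thresholded output disagrees with the target. A minimal sketch; the NumPy usage and example values are assumptions made for illustration:

import numpy as np

def perceptron_update(w, x, target, eta=0.1):
    output = 1 if np.dot(w, x) >= 0 else 0    # threshold (step) activation
    return w + eta * (target - output) * x    # no change when the prediction is right

w = np.zeros(3)
w = perceptron_update(w, np.array([1.0, 2.0, 1.0]), target=0)
print(w)   # [-0.1 -0.2 -0.1]: the wrong prediction (1) pushed w away from x
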
Question 20

Correct

Mark 1.00 out of 1.00

What is the assumption made by the Naive Bayes classifier?

a. That the features in the data are normally distributed
b. That the features in the data are dependent on each other
c. That the features in the data are uniformly distributed
d. That the features in the data are independent of each other
