OBJECTIVE SET - 2 With Answers

The document contains a 20-question multiple-choice and fill-in-the-blank exam on machine learning topics. The exam covers concepts such as features, SVMs, evaluation metrics, Naive Bayes assumptions, dimensionality reduction, overfitting, influential ML pioneers, and algorithms such as decision trees, random forests, EM, and the Gibbs algorithm.


ACADEMIC YEAR: 2022-23 SET NO: 2

III B.TECH II SEMESTER (R18) CSE & CSD I MID EXAMINATIONS, MAY - 2023
MACHINE LEARNING
OBJECTIVE EXAM
NAME_____________________________ HALL TICKET NO A

Answer all the questions. All questions carry equal marks. Time: 20 min. Marks: 10.
I Choose the correct alternative:
1. Feature can be used as a ________________ (CO2,K5) [ ]
A. predictor B. binary split C. All of the above D. None of the above

2. The effectiveness of an SVM depends upon ________________ (CO2,k6) [ ]
A. kernel parameters B. selection of kernel C. soft margin parameter D. All of the above

3. Which of the following evaluation metrics cannot be applied to logistic regression output to compare with the target? (CO3,k1) [ ]
A. accuracy B. auc-roc C. logloss D. mean-squared-error

4. A measurable property or parameter of the data-set is ______________ (CO1,k4) [ ]
A. training data B. test data C. feature D. validation data

5. Which of the following can only be used when training data are linearly separable? (CO3,k5) [ ]
A. linear logistic regression B. linear soft margin SVM C. linear hard-margin SVM D. the centroid method

6. What is the impact of high variance on the training set? (CO3,k5) [ ]
A. underfitting B. overfitting C. both underfitting & overfitting D. depends upon the dataset

7. The father of machine learning is _____________ (CO1,k1) [ ]
A. Geoffrey Everest Hilton B. Geoffrey Hill C. Geoffrey Chaucer D. None of the above

8. Which of the following is true about Naive Bayes? (CO3,k1) [ ]
A. Assumes that all the features in a dataset are equally important
B. Assumes that all the features in a dataset are independent
C. Both A and B
D. None of the above options

9. Which of the following is a reasonable way to select the number of principal components “k”? (CO3,k1) [ ]
A. Choose k to be the smallest value so that at least 99% of the variance is retained
B. Use the elbow method
C. Choose k to be 99% of m (k = 0.99*m, rounded to the nearest integer)
D. Choose k to be the largest value so that 99% of the variance is retained

10. _____________ is a widely used and effective machine learning algorithm based on the idea of bagging. (CO3,k1) [ ]
A. Regression B. Classification C. Decision Tree D. Random Forest

II Fill in the Blanks:

11. Missing data items are __________________ with Bayes classifier. (CO3,k3)

12. Backpropagation is used to update each of the _____________________ in the network (CO2,k1)

13. ______________________ is a machine learning algorithm based on supervised learning (CO1,k2)

14. _________ algorithm finds the most specific hypothesis that fits all the positive examples. (CO1,k2)

15. ______________ learning is a logical approach to machine learning (CO1,k4)

16. _____________ is basically the learning task of the machine (CO1,k6)

17. ____________ is defined as the supposition or proposed explanation based on insufficient evidence (CO2,k4)

18. __________ theory predictions are used in generative learning algorithms (CO2,k5)

19. _________________ is considered a latent variable model to find the local maximum likelihood parameters of a statistical model (CO3,k3)

20. The Gibbs algorithm's error is at most _______________ the error of the Bayes optimal classifier. (CO3,k4)

-ooOoo-
Multiple Choice:

1. A. predictor (Features are characteristics that can be used to predict target variables.)

2. D. All of the above (Effectiveness of an SVM hinges on kernel parameters, kernel selection, and the soft margin parameter.)
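
A minimal scikit-learn sketch of these three knobs (the library, the synthetic dataset, and the parameter values here are illustrative assumptions, not part of the exam):

# Tune kernel choice, kernel parameter (gamma) and soft margin (C).
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
param_grid = {
    "kernel": ["linear", "rbf"],  # selection of kernel
    "gamma": [0.01, 0.1, 1.0],    # kernel parameter
    "C": [0.1, 1.0, 10.0],        # soft margin parameter
}
search = GridSearchCV(SVC(), param_grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)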

3. D. mean-squared-error (Mean squared error is for continuous values, while logistic regression outputs probabilities.)
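
A short sketch of the contrast (scikit-learn assumed; data is synthetic): the three valid metrics apply directly to logistic regression probabilities, while mean squared error is a regression metric.

# accuracy, AUC-ROC and log loss on logistic regression outputs.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, log_loss, roc_auc_score

X, y = make_classification(n_samples=200, random_state=0)
model = LogisticRegression().fit(X, y)
proba = model.predict_proba(X)[:, 1]  # predicted class-1 probabilities

print("accuracy:", accuracy_score(y, model.predict(X)))
print("auc-roc :", roc_auc_score(y, proba))
print("log loss:", log_loss(y, proba))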

4. C. feature (A feature is a measurable property of the data.)

5. C. linear hard-margin SVM (A linear hard-margin SVM requires linearly separable data.)

6. B. overfitting (High variance leads to models that fit the training data too closely but generalize poorly.)
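
An illustrative sketch of this gap (scikit-learn assumed; the unpruned decision tree is just one example of a high-variance model):

# Overfitting signature: training accuracy far above held-out accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, random_state=0)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)  # no depth limit

print("train accuracy:", tree.score(X, y))  # 1.0 on the training set
print("cv accuracy   :", cross_val_score(tree, X, y, cv=5).mean())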

7. D. None of the above (Machine learning has a long history with many contributors.)

8. D. None of the above (Naive Bayes assumes conditional independence between features given the class, not unconditional independence or equal importance.)
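
For reference, the assumption can be stated as the standard Naive Bayes factorization of the class-conditional distribution:

% Features x_1..x_n are conditionally independent given the class y.
\[
  P(x_1, \dots, x_n \mid y) \;=\; \prod_{i=1}^{n} P(x_i \mid y)
\]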

9. B. Use the elbow method (The elbow method visually helps choose the number of principal components for good variance retention.)
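
A sketch of both rules on one curve (scikit-learn and its digits dataset assumed): the bend in the cumulative explained-variance curve is the elbow, and the smallest k reaching 99% implements option A.

# Choose k from the cumulative explained-variance ratio of PCA.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA

X = load_digits().data
pca = PCA().fit(X)

cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.argmax(cumvar >= 0.99)) + 1  # smallest k with >= 99% variance
print("k for 99% variance:", k)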

10. D. Random Forest (Random Forest is a powerful ensemble method based on bagging.)
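
A minimal sketch (scikit-learn assumed, synthetic data): each tree in the forest is trained on a bootstrap sample of the rows, which is exactly the bagging idea.

# Random forest = bagged decision trees (plus random feature subsets).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
forest = RandomForestClassifier(n_estimators=100, bootstrap=True,
                                random_state=0)
print("cv accuracy:", cross_val_score(forest, X, y, cv=5).mean())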

Fill in the Blanks:

11. imputed (Missing data in Bayes classifiers is often imputed using techniques like mean/median imputation or more sophisticated methods.)

12. weights (Backpropagation updates weights in the network to minimize error.)
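
A toy numpy sketch of the update step for a single sigmoid unit (the AND task and all constants here are illustrative): the gradient of the loss is propagated back and each weight is nudged against it.

# Backpropagation in miniature: forward pass, gradient, weight update.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 0, 0, 1], dtype=float)  # AND function

rng = np.random.default_rng(0)
w, b, lr = rng.normal(size=2), 0.0, 0.5

for _ in range(5000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass
    grad_z = (p - y) / len(y)               # dLoss/dz for log loss
    w -= lr * (X.T @ grad_z)                # update each weight
    b -= lr * grad_z.sum()                  # update the bias

print(np.round(p, 2))  # approaches [0, 0, 0, 1]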

13. Support Vector Machine (SVM) (SVM is a supervised learning algorithm for classification and regression.)


14. FIND-S (The FIND-S algorithm seeks the most specific hypothesis that aligns with all positive training examples.)
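
A compact sketch of the procedure (hypotheses are attribute tuples; the toy data is illustrative):

# FIND-S: generalize the most specific hypothesis over the positives.
def find_s(examples):
    positives = [x for x, label in examples if label == "yes"]
    h = list(positives[0])          # start from the first positive
    for x in positives[1:]:
        for i, value in enumerate(x):
            if h[i] != value:
                h[i] = "?"          # generalize attribute i
    return h

data = [(("sunny", "warm", "normal"), "yes"),
        (("sunny", "warm", "high"), "yes"),
        (("rainy", "cold", "high"), "no")]
print(find_s(data))  # ['sunny', 'warm', '?']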

15. Symbolic learning (Symbolic learning represents knowledge using symbols and rules.)

16. Concept learning (Concept learning is the core task of learning a general concept from training data.)

17. Hypothesis (A hypothesis is an unproven assumption based on limited evidence.)

18. Bayesian (Bayesian theory predictions guide generative learning algorithms to model data probability distributions.)

19. Expectation-Maximization (EM) algorithm (The EM algorithm finds local maximum likelihood parameters of statistical models with latent variables or missing data.)
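
A brief sketch (scikit-learn assumed): GaussianMixture fits a latent-variable mixture model with EM, alternating expectation and maximization steps internally until the likelihood converges.

# EM in practice: fit a two-component Gaussian mixture.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 2)),   # hidden cluster 1
               rng.normal(5, 1, (100, 2))])  # hidden cluster 2

gmm = GaussianMixture(n_components=2, random_state=0).fit(X)
print(np.round(gmm.means_, 1))  # close to [[0, 0], [5, 5]]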

20. twice (The expected error of the Gibbs algorithm, which classifies using a single hypothesis drawn from the posterior, is at most twice the error of the Bayes optimal classifier.)
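
Stated as a formula (the classic bound from Mitchell, where the expectation is over hypotheses drawn from the posterior):

% Gibbs error is at most twice the Bayes optimal error, in expectation.
\[
  \mathbb{E}\!\left[\mathrm{error}_{\text{Gibbs}}\right]
    \;\le\; 2\,\mathbb{E}\!\left[\mathrm{error}_{\text{Bayes\ optimal}}\right]
\]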
