Machine-Learning Set 5
5 of 8 sets
401. Which of the following is true about the Ridge and Lasso regression methods in
the case of feature selection?
A. ridge regression uses subset selection of features
B. lasso regression uses subset selection of features
C. both use subset selection of features
D. none of the above
Answer:B
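A minimal sketch of why B holds (assuming scikit-learn and NumPy are available; the data is synthetic, made up for illustration): the L1 penalty in Lasso drives some coefficients exactly to zero, which amounts to implicit subset selection, while the L2 penalty in Ridge only shrinks coefficients without zeroing them.

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
# Only the first two features actually influence y.
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.1, size=100)

lasso = Lasso(alpha=0.5).fit(X, y)
ridge = Ridge(alpha=0.5).fit(X, y)

# Lasso zeroes out the irrelevant features; Ridge keeps them all non-zero.
print("Lasso zero coefficients:", np.sum(lasso.coef_ == 0))
print("Ridge zero coefficients:", np.sum(ridge.coef_ == 0))
```
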
404. If two variables are correlated, is it necessary that they have a linear
relationship?
A. yes
B. no
Answer:B
405. Which of the following options is true regarding Regression and Correlation?
Note: y is the dependent variable and x is the independent variable.
A. the relationship is symmetric between x and y in both.
B. the relationship is not symmetric between x and y in both.
C. the relationship is not symmetric between x and y in case of correlation but in case of
regression it is symmetric.
D. the relationship is symmetric between x and y in case of correlation but in case of regression it
is not symmetric.
Answer:D
406. Suppose you are using a Linear SVM classifier with a 2-class classification
problem. Now you have been given data in which some points are circled red to
represent the support vectors. If you remove any one of the red points from the
data, will the decision boundary change?
A. yes
B. no
Answer:A
407. If you remove the non-red circled points (the non-support vectors) from the
data, will the decision boundary change?
A. true
B. false
Answer:B
408. When the C parameter is set to infinity, which of the following holds true?
A. the optimal hyperplane if exists, will be the one that completely separates the data
B. the soft-margin classifier will separate the data
C. none of the above
Answer:A
409. Suppose you are building an SVM model on data X. The data X can be error
prone, which means that you should not trust any specific data point too much.
Now suppose you want to build an SVM model with a quadratic kernel (a
polynomial kernel of degree 2) that uses the slack variable C as one of its
hyperparameters. What would happen if you used a very large value of C
(C -> infinity)?
A. we can still classify data correctly for the given setting of hyperparameter c
B. we can not classify data correctly for the given setting of hyperparameter c
C. can't say
411. The objective of the support vector machine algorithm is to find a hyperplane
in an N-dimensional space (N = the number of features) that distinctly classifies the
data points.
A. true
B. false
Answer:A
412. Hyperplanes are ___ boundaries that help classify the data points.
A. usual
B. decision
C. parallel
Answer:B
414. Hyperplanes are decision boundaries that help classify the data points.
A. true
B. false
Answer:A
416. In SVM, the kernel function is used to map lower-dimensional data into
higher-dimensional data.
A. true
B. false
Answer:A
421. ___ can be adopted when it's necessary to categorize a large amount of data
with a few complete examples or when there's the need to impose some constraints
on a clustering algorithm.
A. supervised
B. semi-supervised
C. reinforcement
D. clusters
Answer:B
423. In the last decade, many researchers started training bigger and bigger
models, built with several different layers, which is why this approach is called ___.
A. deep learning
B. machine learning
C. reinforcement learning
D. unsupervised learning
Answer:A
426. If two variables are correlated, is it necessary that they have a linear
relationship?
A. yes
B. no
Answer:B
427. Correlated variables can have a zero correlation coefficient. True or False?
A. true
B. false
Answer:A
428. Suppose we fit Lasso Regression to a data set which has 100 features
(X1, X2, ..., X100). Now we rescale one of these features by multiplying it by 10
(say that feature is X1), and then refit Lasso regression with the same
regularization parameter. Which of the following options will be correct?
A. it is more likely for x1 to be excluded from the model
B. it is more likely for x1 to be included in the model
C. can't say
D. none of these
Answer:B
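A sketch of the intuition (hypothetical data, assuming scikit-learn): after multiplying a feature by 10, a coefficient one tenth as large gives the same fit, so the L1 penalty hits that feature less and it is more likely to survive selection.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = 0.5 * X[:, 0] + rng.normal(scale=0.5, size=100)

# Same regularization parameter before and after rescaling "X1".
before = Lasso(alpha=1.0).fit(X, y).coef_[0]

X_scaled = X.copy()
X_scaled[:, 0] *= 10  # rescale the first feature by a factor of 10
after = Lasso(alpha=1.0).fit(X_scaled, y).coef_[0]

# The unscaled feature is penalized out; the rescaled one stays in.
print(before, after)
```
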
429. If a linear regression model fits perfectly, i.e., the train error is zero, then
A. test error is also always zero
B. test error is non zero
C. couldn't comment on test error
D. test error is equal to train error
Answer:C
430. Which of the following metrics can be used for evaluating regression models?
i) R Squared
ii) Adjusted R Squared
iii) F Statistics
iv) RMSE / MSE / MAE
A. ii and iv
B. i and ii
434. Which of the following methods do we use to find the best fit line for data in
Linear Regression?
A. least square error
B. maximum likelihood
C. logarithmic loss
D. both a and b
Answer:A
435. Suppose you are training a linear regression model. Now consider these
points.
1. Overfitting is more likely if we have less data
2. Overfitting is more likely when the hypothesis space is small
Which of the above statement(s) are correct?
A. both are false
B. 1 is false and 2 is true
C. 1 is true and 2 is false
D. both are true
436. We can also compute the coefficients of linear regression with the help of an
analytical method called the Normal Equation. Which of the following is/are true
about the Normal Equation?
1. We don't have to choose the learning rate
2. It becomes slow when the number of features is very large
3. No need to iterate
A. 1 and 2
B. 1 and 3.
C. 2 and 3.
D. 1,2 and 3.
Answer:D
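The three properties above can be sketched in a few lines (hypothetical data, NumPy assumed): the Normal Equation theta = (X^T X)^(-1) X^T y solves for the coefficients in one step, with no learning rate and no iteration, but solving the d-by-d system costs roughly O(d^3), which is slow for many features.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, 3.0]) + 0.5  # intercept 0.5, no noise

# Prepend an intercept column, then solve (X^T X) theta = X^T y directly.
Xb = np.hstack([np.ones((50, 1)), X])
theta = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)
print(theta)  # [0.5, 1.0, 2.0, 3.0] up to float error
```

Using `np.linalg.solve` on the normal system avoids explicitly forming the matrix inverse, which is both faster and numerically safer.
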
437. Which of the following options is true regarding Regression and Correlation?
Note: y is the dependent variable and x is the independent variable.
A. the relationship is symmetric between x and y in both.
B. the relationship is not symmetric between x and y in both.
C. the relationship is not symmetric between x and y in case of correlation but in case of
regression it is symmetric.
D. the relationship is symmetric between x and y in case of correlation but in case of regression it
is not symmetric.
Answer:D
439. Generally, which of the following method(s) is used for predicting a continuous
dependent variable?
1. Linear Regression
2. Logistic Regression
A. 1 and 2
B. only 1
C. only 2
D. none of these.
440. How many coefficients do you need to estimate in a simple linear regression
model (One independent variable)?
A. 1
B. 2
C. 3
D. 4
Answer:B
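A quick check of answer B (hypothetical data, NumPy assumed): a simple linear regression y = b0 + b1*x has exactly two parameters to estimate, the intercept b0 and the slope b1.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 + 3.0 * x  # b0 = 2 (intercept), b1 = 3 (slope)

# np.polyfit returns coefficients highest-degree first: [slope, intercept].
b1, b0 = np.polyfit(x, y, deg=1)
print(b0, b1)  # 2.0, 3.0
```
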
441. In a real problem, you should check to see if the data is separable and then
include slack variables if it is not separable.
A. true
B. false
Answer:B
442. Which of the following are real world applications of the SVM?
A. text and hypertext categorization
B. image classification
C. clustering of news articles
D. all of the above
Answer:D
443. 100 people are at a party. The given data shows how many wear pink and
whether each guest is a man. Imagine a pink-wearing guest leaves; was it a
man?
A. true
B. false
Answer:A
449. Let's say you are working with categorical feature(s) and you have not looked
at the distribution of the categorical variable in the test data. You want to apply
one-hot encoding (OHE) to the categorical feature(s). What challenges may you
face if you have applied OHE to a categorical variable of the train dataset?
A. all categories of categorical variable are not present in the test dataset.
B. frequency distribution of categories is different in train as compared to the test dataset.
C. train and test always have same distribution.
D. both a and b
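One common mitigation for the missing-test-category problem (a sketch assuming scikit-learn; the color data is made up): `OneHotEncoder` with `handle_unknown="ignore"` encodes a category never seen during fit as an all-zero row instead of raising an error.

```python
from sklearn.preprocessing import OneHotEncoder

train = [["red"], ["green"], ["blue"]]
test = [["green"], ["purple"]]  # "purple" never appears in train

# handle_unknown="ignore" maps unseen categories to an all-zero row.
enc = OneHotEncoder(handle_unknown="ignore").fit(train)
print(enc.transform(test).toarray())
```
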
451. Which of the following methods is used to find the optimal features for cluster
analysis?
A. k-means
B. density-based spatial clustering
C. spectral clustering
D. all above
Answer:D
452. scikit-learn also provides functions for creating dummy datasets from scratch:
A. make_classification()
B. make_regression()
C. make_blobs()
D. all above
Answer:D
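The three factory functions named above can be exercised as follows (a minimal sketch; all parameter values here are arbitrary examples):

```python
from sklearn.datasets import make_classification, make_regression, make_blobs

# Labeled classification data, regression data, and Gaussian clusters.
Xc, yc = make_classification(n_samples=100, n_features=4, random_state=0)
Xr, yr = make_regression(n_samples=100, n_features=4, random_state=0)
Xb, yb = make_blobs(n_samples=100, centers=3, random_state=0)

print(Xc.shape, Xr.shape, Xb.shape)
```
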
455. In which of the following is each categorical label first turned into a positive
integer and then transformed into a vector where only one feature is 1 while all the
others are 0?
A. labelencoder class
B. dictvectorizer
C. labelbinarizer class
D. featurehasher
Answer:C
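A minimal sketch of the `LabelBinarizer` behavior described above (the color labels are made up for illustration):

```python
from sklearn.preprocessing import LabelBinarizer

lb = LabelBinarizer()
# Each label is indexed, then turned into a one-hot row vector.
onehot = lb.fit_transform(["red", "green", "blue", "green"])
print(lb.classes_)  # ['blue' 'green' 'red']
print(onehot)       # each row has a single 1, all other entries 0
```
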
456. ___ is the most drastic option and should be considered only when the dataset
is quite large, the number of missing features is high, and any prediction could be
risky.
A. removing the whole line
B. creating sub-model to predict those features
C. using an automatic strategy to impute them according to the other known values
D. all above
Answer:A
457. It's possible to specify whether the scaling process must include both the mean
and the standard deviation using the parameters ___.
A. with_mean=true/false
B. with_std=true/false
C. both a & b
D. none of the mentioned
Answer:C
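Both flags can be seen in action with a tiny sketch (the data is arbitrary): `with_mean` controls centering and `with_std` controls division by the standard deviation.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0]])

full = StandardScaler(with_mean=True, with_std=True).fit_transform(X)
center_only = StandardScaler(with_mean=True, with_std=False).fit_transform(X)

print(full.mean(), full.std())                # ~0.0 and ~1.0
print(center_only.mean(), center_only.std())  # ~0.0 and the original std
```
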
460. Suppose you have fitted a complex regression model on a dataset. Now, you
are using Ridge regression with tuning parameter lambda to reduce its complexity.
Choose the option(s) below which describe the relationship of bias and variance
with lambda.
A. in case of very large lambda; bias is low, variance is low
B. in case of very large lambda; bias is low, variance is high
C. in case of very large lambda; bias is high, variance is low
D. in case of very large lambda; bias is high, variance is high
Answer:C
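The answer can be sketched numerically (hypothetical data, scikit-learn assumed): as the Ridge penalty grows, the coefficients shrink toward zero, so the model becomes simpler, trading higher bias for lower variance.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([1.0, 2.0, 3.0, 4.0, 5.0]) + rng.normal(size=100)

small = Ridge(alpha=0.01).fit(X, y)  # weak penalty: near-unbiased fit
large = Ridge(alpha=1e6).fit(X, y)   # very large lambda: heavy shrinkage

print(np.abs(small.coef_).sum())  # close to the true coefficient magnitudes
print(np.abs(large.coef_).sum())  # shrunk almost to zero
```
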
462. Which of the following method(s) does not have a closed-form solution for its
coefficients?
A. ridge regression
B. lasso
C. both ridge and lasso
465. Suppose that we have N independent variables (X1, X2, ..., Xn) and the
dependent variable is Y. Now imagine that you are applying linear regression by
fitting the best-fit line using least square error on this data. You found that the
correlation coefficient for one of the variables (say X1) with Y is -0.95. Which of
the following is true for X1?
A. relation between the x1 and y is weak
B. relation between the x1 and y is strong
C. relation between the x1 and y is neutral
D. correlation can't judge the relationship
Answer:B
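A small numerical sketch of why -0.95 indicates a strong relationship (hypothetical data, NumPy assumed): a correlation coefficient near -1 means a strong, though negative, linear relationship.

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
y = -2 * x1 + rng.normal(scale=0.5, size=200)  # strong negative dependence

r = np.corrcoef(x1, y)[0, 1]
print(r)  # close to -1: the magnitude, not the sign, measures strength
```
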
466. We have been given a dataset with n records in which we have input attribute
as x and output attribute as y. Suppose we use a linear regression method to model
this data. To test our linear regressor, we split the data in training set and test set
randomly. Now we increase the training set size gradually. As the training set size
increases, what do you expect will happen with the mean training error?
A. increase
B. decrease
C. remain constant
467. We have been given a dataset with n records in which we have input attribute
as x and output attribute as y. Suppose we use a linear regression method to model
this data. To test our linear regressor, we split the data in training set and test set
randomly. What do you expect will happen with bias and variance as you increase
the size of training data?
A. bias increases and variance increases
B. bias decreases and variance increases
C. bias decreases and variance decreases
D. bias increases and variance decreases
Answer:D
468. Suppose, you got a situation where you find that your linear regression model
is under fitting the data. In such situation which of the following options would you
consider?
1. I will add more variables
2. I will start introducing polynomial degree variables
3. I will remove some variables
A. 1 and 2
B. 2 and 3
C. 1 and 3
D. 1, 2 and 3
Answer:A
470. For the given weather data, calculate the probability of not playing.
A. 0.4
B. 0.64
C. 0.36
D. 0.5
Answer:C
472. The minimum time complexity for training an SVM is O(n^2). According to
this fact, what sizes of datasets are not best suited for SVMs?
A. large datasets
B. small datasets
C. medium sized datasets
D. size does not matter
Answer:A
474. We usually use feature normalization before using the Gaussian kernel in
SVM. What is true about feature normalization?
1. We do feature normalization so that no single feature dominates the others
2. Sometimes, feature normalization is not feasible in the case of categorical variables
3. Feature normalization always helps when we use a Gaussian kernel in SVM
A. 1
B. 1 and 2
C. 1 and 3
D. 2 and 3
Answer:B
475. Support vectors are the data points that lie closest to the decision surface.
A. true
B. false
478. Suppose you are using a Linear SVM classifier with a 2-class classification
problem. Now you have been given the following data, in which some points are
circled red to represent the support vectors. If you remove any one of the red
points from the data, will the decision boundary change?
A. yes
B. no
Answer:A
480. For the given weather data, what is the probability that the players will play if
the weather is sunny?
A. 0.5
B. 0.26
C. 0.73
D. 0.6
Answer:D
484. ___ can be adopted when it's necessary to categorize a large amount of data
with a few complete examples or when there's the need to impose some constraints
on a clustering algorithm.
A. supervised
B. semi-supervised
C. reinforcement
D. clusters
Answer:B
488. It is necessary to allow the model to develop a generalization ability and
avoid a common problem called ___.
A. overfitting
B. overlearning
C. classification
D. regression
Answer:A
489. Techniques that involve the usage of both labeled and unlabeled data are
called ___.
A. supervised
B. semi-supervised
C. unsupervised
D. none of the above
Answer:B
498. During the last few years, many ___ algorithms have been applied to deep
neural networks to learn the best policy for playing Atari video games and to teach
an agent how to associate the right action with an input representing
the state.
A. logical
B. classical
C. classification
D. none of above
Answer:D
500. If there is only a discrete number of possible outcomes (called categories), the
process becomes a ___.
A. regression