III-II CSE_ML MID 2_OBJ_Set-1
Uploaded by Mr. RAVI KUMAR I

Code No: CS601PC

AURORA’S SCIENTIFIC AND TECHNOLOGICAL INSTITUTE


B.Tech. III Year II Sem., II Mid-Term Examinations, August-2024
MACHINE LEARNING (CSE)
Objective Exam
Name: ______________________________ Hall Ticket No. A
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.
I. Choose the correct alternative:
1. Any hypothesis found to approximate the target function well over a sufficiently large set of training examples will
also approximate the target function well over other unobserved examples. [ ]
(A) Inductive Hypothesis (B) Hypothesis (C) Learning (D) Concept Learning

2. The _________ algorithm computes the version space containing all hypotheses from H that are consistent with an
observed sequence of training examples. [ ]
(A) Candidate Elimination (B) Artificial Neural Network (C) Inductive Hypothesis (D) None

3. The Minimum Description Length principle is a version of _________ that can be interpreted within a Bayesian framework. [ ]
(A) Occam’s razor (B) Selection measure (C) ID3 (D) PAC

4. A perceptron calculates a linear combination of its inputs, and then outputs ____ [ ]
(A) -1 or 0 (B) 0 or 1 (C) 1 or -1 (D) None

5. If the training examples are not linearly separable, the delta rule converges toward a ______ approximation to the
target concept. [ ]
(A) Over fit (B) Under fit (C) Best fit (D) Doesn’t fit

6. The __________ of L is any minimal set of assertions B such that, for any target concept c and corresponding training
examples Dc, L's classifications follow deductively from B, Dc, and the instance descriptions. [ ]
(A) Version space (B) candidate elimination (C) Inductive bias (D) None

7. ___________ is a significant practical difficulty for decision tree learning and many other learning methods. [ ]
(A) Over fitting (B) Under fitting (C) Doesn’t fit (D) Best fitting

8. One successful method for finding high-accuracy hypotheses is a technique called ___________. [ ]
(A) Post-pruning (B) Under fitting (C) Doesn’t fit (D) Best fitting

9. Concept learning infers a ______-valued function from training examples of its input and output. [ ]
(A) Boolean (B) Hexadecimal (C) Decimal (D) All the above

10. The general tasks that can be performed with the backpropagation algorithm are __________. [ ]
(A) Pattern mapping (B) Prediction (C) Function approximation (D) All the above

II. Fill in the Blanks:


11. __________________ learning methods provide a robust approach to approximating real-valued, discrete-valued, and
vector-valued target functions.
12. Learning algorithms often acquire only some approximation to the target function, and for this reason the process of
learning the target function is often called __________________.

13. In learning to play checkers, the system might learn from ________ training examples consisting of individual
checkers board states and the correct move for each.
14. The Naive Bayes algorithm is based on ____________ and is used for solving classification problems.
15. Full form of MDL is ____________________________________.
16. The backpropagation law is also known as ________________________.
17. Neural Networks are complex __________________ functions with many parameters.
18. The number of different types of layers in radial basis function neural networks are ___________.
19. _______________________________ are the neural networks that are applied to time series data.
20. Confidence intervals can be easily derived using _____________________________ theorem.
* * *
Descriptive Exam
Answer any TWO (2) questions. Each question carries 5 marks. Marks: 2 x 5 = 10
1. Discuss in detail the representation of neural networks.
2. Briefly describe the k-nearest neighbor algorithm.
3. Explain Bayes' theorem.
4. Describe the Naive Bayes method of classification.
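(For revision only, not part of the question paper: descriptive question 2 above asks about the k-nearest neighbor algorithm. A minimal illustrative sketch follows; the function name, the Euclidean distance choice, and the toy data are all assumptions made for this example.)

```python
# Illustrative k-nearest neighbor classifier: majority vote over the
# k training points closest to the query by Euclidean distance.
from collections import Counter
import math


def knn_classify(train, query, k=3):
    """train: list of (feature_vector, label) pairs; query: feature vector."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    # Sort training examples by distance to the query and keep the k nearest.
    neighbors = sorted(train, key=lambda ex: dist(ex[0], query))[:k]
    # Return the most common label among the k nearest neighbors.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]


train = [((0.0, 0.0), "A"), ((0.1, 0.2), "A"), ((1.0, 1.0), "B"), ((0.9, 1.1), "B")]
print(knn_classify(train, (0.2, 0.1)))  # prints A
```

Note the "lazy learner" behavior tested in Set-2: all work happens at query time; there is no training phase beyond storing the examples.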

Code No: CS601PC


AURORA’S SCIENTIFIC AND TECHNOLOGICAL INSTITUTE
B.Tech. III Year II Sem., II Mid-Term Examinations, August-2024
MACHINE LEARNING (CSE)
Objective Exam
Name: ______________________________ Hall Ticket No. A
Answer All Questions. All Questions Carry Equal Marks. Time: 20 Min. Marks: 10.
I. Choose the correct alternative:
1. Bayes' rule can be used for ________________. [ ]
(A) Answering probabilistic query (B) Increasing complexity (C) Solving queries (D) Decreasing complexity

2. While creating a Bayesian network, each node is ________ of its non-descendants, given its predecessors. [ ]
(A) Conditionally independent (B) Functionally dependent (C) Both A & B (D) None

3. The instance-based learner is a ____________. [ ]
(A) Lazy learner (B) Eager learner (C) No learner (D) None

4. Which of the following is correct about Naive Bayes? [ ]
(A) Assumes that all the features in a dataset are independent
(B) Assumes that all the features in a dataset are equally important
(C) Both A & B (D) None

5. For the analysis of ML algorithms, we need ___________. [ ]
(A) Computational learning theory (B) Statistical learning theory (C) Both A & B (D) None

6. When would a genetic algorithm terminate? [ ]
(A) Maximum number of generations has been produced
(B) Satisfactory fitness level has been reached for the population (C) Both A & B (D) None

7. GA techniques are inspired by _________ biology. [ ]
(A) Evolutionary (B) Cytology (C) Anatomy (D) Ecology

8. _____ terms are required for building a Bayes model. [ ]
(A) 3 (B) 2 (C) 1 (D) 4

9. The genetic algorithm operates by iteratively updating a pool of hypotheses, called the ______. [ ]
(A) Population (B) Fitness (C) Selection (D) None

10. What are the advantages of the nearest neighbour algorithm? [ ]
(A) Training is very fast (B) Can learn complex target functions (C) Doesn't lose information (D) All the above

II. Fill in the Blanks:


11. A perceptron calculates a linear combination of its inputs, and then outputs __________.
12. Concept learning infers a ___________-valued function from training examples of its input and output.
13. The activation function that is most widely used in perceptron network is _____________________.
14. The impact of high variance on the training set is _________________.
15. MLE estimates are often undesirable because they have ________________ variance.
16. The hypothesis that is most consistent with the training data D is called the ________________ hypothesis.
17. Bayesian belief networks are also called _________________________ networks.
18. ______________________ are motivated by analogy to biological evolution.
19. In _____________________________, the hypotheses manipulated are computer programs rather than bit strings.
20. The _______________ effect describes how individual learning can alter the course of evolution.

* * *

Descriptive Exam
Answer any TWO (2) questions. Each question carries 5 marks. Marks: 2 x 5 = 10
1. Discuss genetic algorithms in detail.
2. Present the brute-force Bayesian concept learning algorithm and elaborate on it.
3. Explain back-propagation algorithm in detail.
4. Explain the inductive-analytical approaches to learning.
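(For revision only, not part of the question paper: several questions in this set turn on Bayes' rule, P(h|D) = P(D|h)P(h) / P(D). A small worked sketch follows, using the classic medical-test numbers from Mitchell's textbook; the function name and the two-hypothesis setup are assumptions made for this example.)

```python
# Illustrative Bayes' rule posterior for a two-hypothesis case (h vs. not-h).
def bayes_posterior(prior_h, likelihood_d_given_h, likelihood_d_given_not_h):
    """Return P(h|D) = P(D|h)P(h) / P(D), with P(D) via total probability."""
    p_d = (likelihood_d_given_h * prior_h
           + likelihood_d_given_not_h * (1 - prior_h))
    return likelihood_d_given_h * prior_h / p_d


# P(h) = 0.008, P(D|h) = 0.98, P(D|not-h) = 0.03: despite a positive test,
# the posterior stays low because the prior is so small.
print(round(bayes_posterior(0.008, 0.98, 0.03), 3))  # prints 0.209
```

This is the kind of computation the brute-force Bayes concept learner repeats for every hypothesis in H before picking the maximum a posteriori one.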
