III-II CSE - ML MID 2 - OBJ - Set-1
2. The _________ algorithm computes the version space containing all hypotheses from H that are consistent with an
observed sequence of training examples. [ ]
(A) Candidate Elimination (B) Artificial Neural Network (C) Inductive Hypothesis (D) None
3. The Minimum Description Length principle is a version of _________ that can be interpreted within a Bayesian framework. [ ]
(A) Occam’s razor (B) Selection measure (C) ID3 (D) PAC
4. A perceptron calculates a linear combination of its inputs, and then outputs ____ [ ]
(A) -1 or 0 (B) 0 or 1 (C) 1 or -1 (D) None
5. If the training examples are not linearly separable, the delta rule converges toward a _________ approximation to the
target concept. [ ]
(A) Over fit (B) Under fit (C) Best fit (D) Doesn’t fit
6. The __________ of L is any minimal set of assertions B such that for any target concept c and corresponding training
examples Dc, L's classifications follow deductively from B, Dc, and the instance descriptions. [ ]
(A) Version space (B) candidate elimination (C) Inductive bias (D) None
7. ___________ is a significant practical difficulty for decision tree learning and many other learning methods. [ ]
(A) Over fitting (B) Under fitting (C) Doesn’t fit (D) Best fitting
8. One successful method for finding high-accuracy hypotheses is a technique called ___________. [ ]
(A) Post-pruning (B) Under fitting (C) Doesn’t fit (D) Best fitting
9. Concept learning infers a ______-valued function from training examples of its input and output. [ ]
(A) Boolean (B) Hexadecimal (C) Decimal (D) All the above
10. The general tasks that are performed with the back-propagation algorithm include: [ ]
(A) Pattern mapping (B) Prediction (C) Function approximation (D) All the above
13. In learning to play checkers, the system might learn from ________ training examples consisting of individual
checkers board states and the correct move for each.
14. The Naive Bayes algorithm is based on ____________ and is used for solving classification problems.
15. Full form of MDL is ____________________________________.
16. The back-propagation law is also known as ________________________.
17. Neural Networks are complex __________________ functions with many parameters.
18. The number of different types of layers in radial basis function neural networks is ___________.
19. _______________________________ are the neural networks that are applied to time series data.
20. Confidence intervals can be easily derived using _____________________________ theorem.
* * *
Descriptive Exam
Answer any TWO (2) questions. Each question carries 5 marks. Marks: 2 x 5 = 10
1. Discuss in detail about representation of Neural Networks.
2. Describe briefly about k-nearest neighbor algorithm.
3. Explain Bayes' theorem.
4. Describe the Naive Bayesian method of classification.
2. While constructing a Bayesian network, each node is ________ of its non-descendants, given its predecessors. [ ]
(A) Conditionally independent (B) Functionally dependent (C) Both A & B (D) None
9. The genetic algorithm operates by iteratively updating a pool of hypotheses, called the ______. [ ]
(A) Population (B) Fitness (C) Selection (D) None
* * *
Descriptive Exam
Answer any TWO (2) questions. Each question carries 5 marks. Marks: 2 x 5 = 10
1. Discuss genetic algorithms in detail.
2. Describe the brute-force Bayesian concept learning algorithm and elaborate on it.
3. Explain back-propagation algorithm in detail.
4. Explain the inductive-analytical approaches to learning.