UNIT 3- Evaluation-Practice Sheet

OBJECTIVE TYPE QUESTIONS

1) The output given by the AI machine is known as ________ (Prediction/ Reality)


2) Sarthak made a face mask detector system for which he collected a dataset and used all of
the dataset to train the model. He then used the same data to evaluate the model, which gave
the correct answer every time but could not perform well on an unknown dataset.
Name the concept.
3) Which evaluation parameter takes into consideration all the correct predictions?
4) Which one of the following scenarios results in a high false positive cost?
(a) viral outbreak (b) forest fire (c) flood (d) spam filter
5) _____________ is used to record the result of the comparison between prediction
and reality. It is not an evaluation metric but a record which can help in evaluation.
6) Raunak was learning the conditions that make up the confusion matrix. He came
across a scenario in which the machine that was supposed to predict an animal was always
predicting not an animal. What is this condition called?
(a) False Positive (b) True Positive (c) False Negative (d) True Negative
7) Which two evaluation methods are used to calculate F1 Score?
(a) Precision and Accuracy (b) Precision and Recall
(c) Accuracy and Recall (d) Precision and F1 score
8) Which of the following statements is not true about overfitting models?
(a) The model learns the pattern and noise in the data to such an extent that it harms the
performance of the model on a new dataset
(b) The training result is very good and the test result is poor
(c) It interprets noise as patterns in the data
(d) The training accuracy and test accuracy are both low
9) Priya was confused by the terms used in the evaluation stage. Help her identify the term
used for the percentage of correct predictions out of all the observations.
(a) Accuracy (b) Precision (c) Recall (d) F1 Score
10) Statement 1: Confusion matrix is an evaluation metric.
Statement 2: Confusion matrix is a record which helps in evaluation.
(a) Both statement 1 and statement 2 are correct.
(b) Both statement 1 and statement 2 are incorrect.
(c) Statement 1 is correct and statement 2 is incorrect.
(d) Statement 1 is incorrect and statement 2 is correct.
11) In spam mail detection, which of the following will be considered a 'False Negative'?
(a) When a legitimate email is accurately identified as not spam
(b) When a spam email is mistakenly identified as legitimate.
(c) When an email is accurately identified as spam.
(d) When an email is inaccurately labelled as important.
12) ___________ is one of the parameters for evaluating a model's performance and is defined
as the fraction of positive cases that are correctly identified.
(a) Accuracy (b) Precision (c) Recall (d) F1 Score
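
For reference, questions 3, 7, 9 and 12 above, and the calculation tasks in the subjective section below, all rely on the standard metric definitions. The following is a minimal Python sketch of those formulas computed from confusion-matrix counts; the function and variable names are illustrative, not taken from the worksheet:

# Standard evaluation metrics from confusion-matrix counts.
# tp, tn, fp, fn = true positives, true negatives, false positives, false negatives.

def accuracy(tp, tn, fp, fn):
    # fraction of all predictions that are correct
    return (tp + tn) / (tp + tn + fp + fn)

def precision(tp, fp):
    # fraction of predicted positives that are actually positive
    return tp / (tp + fp)

def recall(tp, fn):
    # fraction of actual positives that are correctly identified
    return tp / (tp + fn)

def f1_score(p, r):
    # harmonic mean of precision and recall
    return 2 * p * r / (p + r)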
SUBJECTIVE TYPE QUESTIONS
1) Draw the confusion matrix for the following data:
• the number of true positives = 32 • the number of true negatives = 28
• the number of false positives = 51 • the number of false negatives = 6
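
A minimal sketch of how the counts in question 1 are usually arranged in a 2x2 confusion matrix (rows = prediction, columns = reality); the row/column convention shown here is an assumption, since textbooks vary:

# Counts given in subjective question 1.
tp, tn, fp, fn = 32, 28, 51, 6

# Rows are the model's prediction, columns are the reality.
print("                  Reality: Yes   Reality: No")
print(f"Prediction: Yes   {tp:12d}   {fp:11d}")   # TP | FP
print(f"Prediction: No    {fn:12d}   {tn:11d}")   # FN | TN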
2) An AI model made the following sales predictions for a newly launched mobile phone:
(i) Identify the total number of wrong predictions made by the model.
(ii) Calculate precision, recall and F1 Score.
Note: All steps of calculation and formulas must be shown.

3) The automated trade industry has developed an AI model which predicts the selling and
purchasing of automobiles. During testing, the AI model came up with the following
predictions. Note: All steps of calculation and formulas must be shown.
(i) Identify the total number of correct predictions made by the model.
(ii) How many total tests have been performed in the above scenario?
(iii) Calculate precision, recall and F1 score.

4) A binary classification model has been developed to classify news articles as
either "Fake News" or "Real News". The model was tested on a dataset of 500 news
articles and the resulting confusion matrix is as follows:

                        Reality
                     Yes      No
 Prediction   Yes     45      15
              No      20     420
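
For reference, a minimal sketch of how the usual metrics could be computed from this matrix, assuming the 'Yes' class corresponds to 'Fake News' (the positive class); the worksheet does not state which class is positive, so treat that as an assumption:

# Reading the matrix above: Prediction Yes / Reality Yes = 45 (TP),
# Prediction Yes / Reality No = 15 (FP), Prediction No / Reality Yes = 20 (FN),
# Prediction No / Reality No = 420 (TN).
tp, fp, fn, tn = 45, 15, 20, 420

accuracy  = (tp + tn) / (tp + tn + fp + fn)                # 465 / 500 = 0.93
precision = tp / (tp + fp)                                 # 45 / 60   = 0.75
recall    = tp / (tp + fn)                                 # 45 / 65   ≈ 0.692
f1        = 2 * precision * recall / (precision + recall)  # = 0.72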

5) What do you mean by evaluation of an AI model? Also explain the difference between
the concepts of underfitting and overfitting with respect to AI model evaluation.
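
A minimal sketch (not part of the worksheet) of the training-versus-test comparison that this question is about; the accuracy thresholds used here are illustrative assumptions, not standard cut-offs:

# Rough diagnosis of fit quality from training vs. test accuracy.
def diagnose_fit(train_acc, test_acc):
    if train_acc > 0.9 and (train_acc - test_acc) > 0.1:
        return "overfitting: learns noise, excellent on training data, poor on new data"
    if train_acc < 0.7 and test_acc < 0.7:
        return "underfitting: too simple, poor on both training and test data"
    return "reasonable fit: similar, acceptable accuracy on training and test data"

print(diagnose_fit(0.99, 0.62))   # typical overfitting pattern
print(diagnose_fit(0.55, 0.53))   # typical underfitting pattern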
