AI Evaluation
Evaluation
Evaluation refers to systematically checking and analysing the merit,
correctness and reliability of an AI model based on the outputs it
produces.
Evaluation Metrics
Evaluation metrics refer to the measures used to test the quality of
an AI model.
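For example, accuracy, the fraction of predictions that match the actual outcomes, is one such measure. A minimal Python sketch, with made-up labels standing in for a model's outputs and the observed results:

```python
# Minimal sketch: accuracy as an evaluation metric.
# The labels below are illustrative; in practice they would come from
# the AI model's outputs and the observed outcomes.

actual    = [1, 0, 1, 1, 0, 1, 0, 0]   # observed/measured outcomes
predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # outputs produced by the model

# Accuracy = correct predictions / total predictions
correct = sum(1 for a, p in zip(actual, predicted) if a == p)
accuracy = correct / len(actual)

print(f"Accuracy: {accuracy:.2f}")      # 6 of 8 correct -> 0.75
```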
The causes behind the performance of an AI model are:
1. Overfitting
Overfitting refers to a situation when an AI model fits so exactly
against its training data that it performs very well on the data it was
trained on but fails to produce accurate results for new, unseen data.
2. Underfitting
Underfitting refers to a situation when an AI model is not complex
enough to capture the structure and relationships of its training data
and therefore cannot predict accurate outcomes.
3. Generalization
Generalization refers to how well the concepts learned by a machine
learning model apply to specific examples not seen by the model
when it was learning. The goal of a good machine learning model is to
generalize well from the training data to any data from the problem
domain. This allows us to make predictions in the future on data the
model has never seen.
Ideally, an AI model should be balanced between underfitting and
overfitting to be a good fit, as the short sketch below illustrates.
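As an illustration of how these three situations show up in practice, here is a minimal Python sketch, assuming scikit-learn is available; the synthetic dataset and the tree depths are illustrative choices:

```python
# Minimal sketch (assumes scikit-learn is installed): comparing an underfit,
# a balanced, and an overfit model by looking at training vs. test accuracy.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic classification data standing in for a real problem domain.
X, y = make_classification(n_samples=500, n_features=20, n_informative=5,
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

# max_depth controls model complexity:
#   depth 1  -> too simple (tends to underfit)
#   depth 4  -> moderate complexity (closer to a good fit)
#   no limit -> memorises the training data (tends to overfit)
for depth in (1, 4, None):
    model = DecisionTreeClassifier(max_depth=depth, random_state=42)
    model.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train accuracy={model.score(X_train, y_train):.2f}, "
          f"test accuracy={model.score(X_test, y_test):.2f}")
```

An unconstrained tree typically scores near 1.00 on the training data but noticeably lower on the test data (overfitting), while the depth-1 tree scores poorly on both (underfitting); a model that generalizes well keeps the two scores close.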
Confusion Matrix
A Confusion Matrix is a technique using a chart or table for
summarizing the performance of a classification-based Al model by
listing the predicted values of an Al model and the actual/correct
outcome values.
A confusion matrix includes both the predicted and actual values in
the context of an AI model, which are:
◆ the Actual Value represents the actual result (observed or
measured).
◆ the Predicted Value represents the result predicted by the AI
model.
[Table: confusion matrix with rows for the Actual Values (True, False) crossed with columns for the Predicted Values]
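As an illustration, the following Python sketch counts the four cells of a confusion matrix for a small set of made-up actual and predicted labels:

```python
# Minimal sketch: building a 2x2 confusion matrix by counting how the
# predicted values line up with the actual values (labels are illustrative).

actual    = [1, 1, 0, 1, 0, 0, 1, 0]   # actual/correct outcome values
predicted = [1, 0, 0, 1, 0, 1, 1, 0]   # values predicted by the AI model

tp = fp = fn = tn = 0
for a, p in zip(actual, predicted):
    if a == 1 and p == 1:
        tp += 1        # True Positive: actually True, predicted True
    elif a == 0 and p == 1:
        fp += 1        # False Positive: actually False, predicted True
    elif a == 1 and p == 0:
        fn += 1        # False Negative: actually True, predicted False
    else:
        tn += 1        # True Negative: actually False, predicted False

print("               Predicted True   Predicted False")
print(f"Actual True    {tp:^14}  {fn:^15}")
print(f"Actual False   {fp:^14}  {tn:^15}")
```

The two diagonal cells (True Positive and True Negative) count the correct predictions, while the off-diagonal cells (False Positive and False Negative) count the model's errors.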