
MACHINE LEARNING PROJECT REPORT

1. Introduction to the problem being solved. Why is it necessary to solve the problem?
A cardiologist measures vitals and hands over this data so that data analysis can be performed to predict whether certain patients have heart disease. We would like to build a machine learning algorithm that can learn and improve from experience, and thus classify patients as either positive or negative for heart disease.
Such information, if predicted well in advance, can provide important insights to doctors, who can then adapt their diagnosis and treatment on a per-patient basis.

Necessity to solve the problem: Heart disease is the leading cause of death for men, women, and people of most racial and ethnic groups in the world. One person dies every 36 seconds from cardiovascular diseases (CVDs). An estimated 17.9 million people died from CVDs in 2019, representing 32% of all global deaths. Of these deaths, 85% were due to heart attack and stroke. Most of these deaths occurred because the disease was detected late. This is why it is important to detect cardiovascular disease as early as possible, so that management with counselling and medicines can begin.

2. Brief description of the machine learning models used

We used the following models in our project:

Logistic Regression
Logistic regression is a supervised machine learning algorithm that can be used to model the probability of a certain class or event. It is used when the data is linearly separable and the outcome is binary or dichotomous in nature, which means logistic regression is usually applied to binary classification problems. Binary classification refers to predicting an output variable that is discrete, with exactly two classes.
A logistic regression model predicts a dependent data variable by analyzing the relationship between one or more existing independent variables. The independent variables can be numeric or categorical, but the dependent variable is always categorical.
A few examples of binary classification are Yes/No, Pass/Fail, Win/Lose, and Cancerous/Non-cancerous.
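
As a minimal, hypothetical sketch (the project's actual notebook is attached in Section 6), a logistic regression classifier could be trained with scikit-learn roughly as follows; the synthetic X and y below stand in for the real heart disease features and labels:

    # Hypothetical logistic regression sketch; synthetic data stands in
    # for the project's heart disease dataset.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=300, n_features=13, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)

    model = LogisticRegression(max_iter=1000)  # binary classifier
    model.fit(X_train, y_train)
    print("Logistic regression accuracy:", model.score(X_test, y_test))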

K-nearest-neighbor algorithm

The k-nearest-neighbor algorithm (supervised machine learning), often abbreviated k-NN, is an approach to data classification that estimates how likely a data point is to belong to one group or the other depending on which group the data points nearest to it are in.
The k-NN algorithm assumes similarity between the new case and the available cases, and puts the new case into the category that is most similar to the available categories.
The k-nearest-neighbor algorithm is an example of a "lazy learner", meaning that it does not build a model from the training set until a query of the data set is performed.
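
A minimal k-NN sketch, assuming scikit-learn and the same hypothetical train/test split as in the logistic regression sketch above:

    from sklearn.neighbors import KNeighborsClassifier

    # k = 5: a new point is assigned the majority class among its
    # 5 nearest training points
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(X_train, y_train)  # "lazy": this essentially just stores the data
    print("k-NN accuracy:", knn.score(X_test, y_test))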
Random forest

A random forest is a supervised machine learning technique that is used to solve regression and classification problems. It utilizes ensemble learning, a technique that combines many classifiers to provide solutions to complex problems.
A random forest algorithm consists of many decision trees. The ‘forest’ generated by the random forest algorithm is trained through bagging, or bootstrap aggregating. Bagging is an ensemble meta-algorithm that improves the accuracy of machine learning algorithms.
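
A minimal random forest sketch under the same assumptions; n_estimators controls how many bagged decision trees make up the forest:

    from sklearn.ensemble import RandomForestClassifier

    # 100 decision trees, each fit on a bootstrap sample of the training
    # data; the forest predicts by majority vote over its trees
    rf = RandomForestClassifier(n_estimators=100, random_state=42)
    rf.fit(X_train, y_train)
    print("Random forest accuracy:", rf.score(X_test, y_test))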

3. Brief description of the quantitative evaluations used (for example, the confusion matrix) and the visual evaluations used (for example, AUC and ROC)
AUC-ROC Curve:

The Receiver Operating Characteristic (ROC) curve is an evaluation metric for binary classification problems. It is a probability curve that plots the true positive rate (TPR) against the false positive rate (FPR) at various threshold values, essentially separating the ‘signal’ from the ‘noise’. The Area Under the Curve (AUC) measures the ability of a classifier to distinguish between classes and is used as a summary of the ROC curve.
The higher the AUC, the better the performance of the model at distinguishing between the positive and negative classes.

When AUC = 1, the classifier is able to distinguish perfectly between all the positive and negative class points. If, however, the AUC were 0, the classifier would be predicting all negatives as positives and all positives as negatives.

When 0.5 < AUC < 1, there is a high chance that the classifier will be able to distinguish the positive class values from the negative class values. This is because the classifier detects more true positives and true negatives than false negatives and false positives.

When AUC = 0.5, the classifier is not able to distinguish between positive and negative class points, meaning it is predicting either a random class or a constant class for all the data points.

So, the higher the AUC value for a classifier, the better its ability
to distinguish between positive and negative classes.

How does the AUC-ROC curve work?

In a ROC curve, a higher X-axis value indicates a higher number of false positives relative to true negatives, while a higher Y-axis value indicates a higher number of true positives relative to false negatives. So the choice of threshold depends on the ability to balance false positives against false negatives.

Let’s dig a bit deeper and understand how the ROC curve would look for different threshold values, and how the specificity and sensitivity would vary. We can understand the curve by generating a confusion matrix for each point corresponding to a threshold and discussing the performance of our classifier:

[Figure: ROC curve with labelled threshold points A–E]

Point A is where the sensitivity is highest and the specificity is lowest. This means all the positive class points are classified correctly and all the negative class points are classified incorrectly.

In fact, any point on the blue diagonal line corresponds to a situation where the true positive rate is equal to the false positive rate.
All points above this line correspond to the situation where the proportion of correctly classified points belonging to the positive class is greater than the proportion of incorrectly classified points belonging to the negative class.
Although point B has the same sensitivity as point A, it has higher specificity, meaning the number of incorrectly classified negative class points is lower than at the previous threshold. This indicates that this threshold is better than the previous one.

Between points C and D, the sensitivity at point C is higher than at point D for the same specificity. This means that, for the same number of incorrectly classified negative class points, the classifier predicted a higher number of positive class points. Therefore, the threshold at point C is better than at point D.

Now, depending on how many incorrectly classified points we are willing to tolerate, we would choose between point B and point C as the operating threshold.
Point E is where the specificity becomes highest, meaning there are no false positives classified by the model: the model can correctly classify all the negative class points. We would choose this point if our problem demanded that false positives be avoided entirely.

The ideal point is where both the sensitivity and the specificity are at their highest, and the classifier correctly classifies all the positive and negative class points.
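
As a hedged sketch of how the curve and its summary are computed in practice (assuming scikit-learn and the hypothetical model and test split from the earlier snippets):

    from sklearn.metrics import roc_curve, roc_auc_score

    # Predicted probabilities for the positive class
    y_scores = model.predict_proba(X_test)[:, 1]

    # fpr and tpr traced out at each candidate threshold; points such as
    # A-E above are particular thresholds along this curve
    fpr, tpr, thresholds = roc_curve(y_test, y_scores)

    print("AUC:", roc_auc_score(y_test, y_scores))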

Confusion Matrix:

A confusion matrix is a summary of prediction results on a classification problem. The numbers of correct and incorrect predictions are summarized with count values and broken down by class. This breakdown is the key to the confusion matrix.

The confusion matrix shows the ways in which your classification model is confused when it makes predictions. It gives you insight not only into the errors being made by your classifier but, more importantly, into the types of errors being made. It is this breakdown that overcomes the limitation of using classification accuracy alone.

Four outcomes of classification:

A binary classifier predicts all data instances of a test dataset as either positive or negative. This classification (or prediction) produces four outcomes: true positive, true negative, false positive, and false negative.

True positive (TP): correct positive prediction
False positive (FP): incorrect positive prediction
True negative (TN): correct negative prediction
False negative (FN): incorrect negative prediction

Two basic measures from the confusion matrix

Error rate (ERR) and accuracy (ACC) are the most common and intuitive measures derived from the confusion matrix.
Error rate
Error rate (ERR) is calculated as the number of incorrect predictions divided by the total number of predictions: ERR = (FP + FN) / (TP + TN + FP + FN). The best error rate is 0.0, whereas the worst is 1.0.

Accuracy
Accuracy (ACC) is calculated as the number of correct predictions divided by the total number of predictions: ACC = (TP + TN) / (TP + TN + FP + FN). The best accuracy is 1.0, whereas the worst is 0.0. It can also be calculated as 1 − ERR.
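
A minimal sketch of computing these measures with scikit-learn, under the same assumptions as the earlier snippets:

    from sklearn.metrics import confusion_matrix

    y_pred = model.predict(X_test)

    # For binary labels, ravel() unpacks the 2x2 matrix as TN, FP, FN, TP
    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

    acc = (tp + tn) / (tp + tn + fp + fn)  # accuracy
    err = 1 - acc                          # error rate
    print("Accuracy:", acc, "Error rate:", err)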

4. Results and discussion

The project involved analysis of the heart disease patient dataset with proper data processing. Three models were then trained and tested, with maximum scores as follows:
1. K-Neighbors Classifier: 68.8%
2. Logistic Regression: 88.5%
3. Random Forest Classifier: 85.2%
The Logistic Regression model gives us the best accuracy, 88.5%. These models can be further improved using hyperparameter tuning.
The comparison of the accuracy of the models is depicted below.
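
A minimal matplotlib sketch of this comparison, using the accuracies reported above (a hypothetical reconstruction, not the project's original figure):

    import matplotlib.pyplot as plt

    # Accuracies reported in Section 4
    models = ["k-NN", "Logistic Regression", "Random Forest"]
    accuracy = [68.8, 88.5, 85.2]

    plt.bar(models, accuracy)
    plt.ylabel("Accuracy (%)")
    plt.title("Model accuracy comparison")
    plt.show()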

5. Conclusions
With the increasing number of deaths due to heart disease, it has become essential to develop a system that predicts heart disease effectively and accurately. The motivation for the study was to find the most efficient ML algorithm for the detection of heart disease. This study compares the accuracy scores of the K-Nearest Neighbors, Logistic Regression, and Random Forest algorithms for predicting heart disease using the UCI Machine Learning Repository dataset. The results of this study indicate that Logistic Regression is the most efficient algorithm, with an accuracy score of 88.5%. In the future, the work can be enhanced by developing a web application based on the best-performing model, as well as by using a larger dataset than the one used in this analysis, which will help provide better results and help health professionals predict heart disease effectively and efficiently.

6. Implementation code
The Jupyter notebook of the project is attached here.

SUBMITTED BY: Group 3

Abhishek Agrawal
Monal Singh
Sushant Mishra
Vedika Agrawal
Shikhar Lohiya
Shahbaz Khan
Aman Pandey
