Detection of Stroke Disease Using Machine Learning Algorithms
Abstract:
A stroke is a medical condition in which poor blood flow to the brain results in cell death. It is now a leading cause of death all over the world. Several risk factors believed to be related to the cause of stroke have been identified by examining affected individuals. Using these risk factors, a number of works have been carried out for predicting and classifying stroke diseases. Most of the models are based on data mining and machine learning algorithms. In this work, we have used four machine learning algorithms to detect the type of stroke that has possibly occurred, or can occur, from a person's physical state and medical report data. We have collected a good number of entries from hospitals and used them to solve our problem. The classification result is satisfactory and can be used for real-time medical reporting. We believe that machine learning algorithms can support a better understanding of diseases and can be a good healthcare companion.
Index Terms—Stroke, machine learning, WEKA, Naive Bayes, J48, k-NN, Random Forest.
Introduction:
A stroke occurs due to poor blood flow to the brain, which results in cell death. The two main types of stroke are ischemic stroke and hemorrhagic stroke. Ischemic stroke occurs due to lack of blood flow, while hemorrhagic stroke occurs due to bleeding [1]. Another type of stroke is the transient ischemic attack. Ischemic stroke has two categories: embolic stroke and thrombotic stroke. An embolic stroke occurs when a clot forms in any part of the body, moves toward the brain, and blocks blood flow. A thrombotic stroke is caused by a clot that weakens blood flow in an artery. Hemorrhagic stroke is classified into two types: subarachnoid hemorrhage and intracerebral hemorrhage. A transient ischemic attack is also known as a "ministroke".
A large number of people lose their lives due to stroke, and the toll is increasing in developing countries [3]. There are several stroke risk factors that govern the different types of stroke. Predictive algorithms help to understand the relation between these risk factors and the types of stroke. Machine learning algorithms can improve patients' health through early detection and treatment. We have used several machine learning algorithms to detect the type of stroke that can occur, or has already occurred, in a patient from their clinical report and statistical data. We have built a stroke dataset by collecting data from various sources, validated by medical experts. The dataset was then processed for use with the machine learning algorithms. We have built several classification models. The results of the models are satisfactory and can be used for real-time classification of a patient's stroke.
In recent years, different works based on machine learning algorithms have been published. Some of them are discussed here. Govindarajan et al. used Artificial Neural Networks (ANN), Support Vector Machine (SVM), Decision Tree, Logistic Regression, and ensemble methods (Bagging and Boosting) to classify stroke disease [2]. They collected data from Sugam Multispeciality Hospital, India, containing information about 507 stroke patients ranging from 35 to 90 years of age. The novelty of their work lies in the data processing phase, where an algorithm called a novel stemmer was used to prepare the dataset. In their collected dataset, 91.52% of patients were affected by ischemic stroke and only 8.48% by hemorrhagic stroke. Among the mentioned algorithms, Artificial Neural Networks trained with the stochastic gradient descent learning algorithm achieved the highest accuracy, 95.3%, for classifying stroke. Jeena and Kumar proposed a model based on Support Vector Machine for stroke prediction [4]. They collected data from the International Stroke Trial database [5]. The dataset contains 12 risk factors (attributes). They used 350 samples in their work: 300 samples for training and 50 for testing. Different kernel functions such as polynomial, quadratic, radial basis function, and linear were applied. The highest accuracy of 91% was found with the linear kernel, which also gave the balanced F-measure of 91.7.
Singh and Choudhary developed a model with an Artificial Neural Network (ANN) for stroke prediction [6]. They collected datasets from the Cardiovascular Health Study (CHS) database. Three datasets were constructed, each containing 212 stroke occurrences along with 52, 69, and 79 non-stroke entries, respectively. The final dataset contains 357 attributes and 1824 entities with 212 occurrences of stroke. During feature selection, the C4.5 decision tree algorithm was used, and Principal Component Analysis (PCA) was applied for dimension reduction. In the ANN implementation, they used the backpropagation learning method. They obtained accuracies of 95%, 95.2%, and 97.7% for the three datasets, respectively.
Adam et al. developed a classification model for ischemic stroke using the decision tree algorithm and k-nearest neighbor (k-NN) [7]. Their dataset was collected from several hospitals and medical centers in Sudan and is the first dataset for ischemic disease in Sudan. It contains 15 features and information about 400 patients. The results of the experiment show that the performance of decision tree classification is higher than that of the k-NN algorithm. Sudha et al. used the Decision Tree, Bayesian classifier, and Neural Network for stroke classification [8]. Their dataset contains 1000 records. The PCA algorithm was used for dimensionality reduction. Over ten rounds of each algorithm, they obtained highest accuracies of 92%, 91%, and 94% for the Neural Network, Naive Bayes classifier, and Decision Tree algorithm, respectively. Some of the methods, such as [4] and [7], use a very small dataset. Govindarajan et al. [2] predicted only two classes of stroke. Therefore, we have proposed a method that uses a large dataset with four classes of stroke.
Our database contains string values, which cannot be processed by WEKA. Therefore, we had to apply integer encoding to the string values. For example, we replaced the string "Male" with 0, "Female" with 1, and so on. Some attribute values are missing in the dataset, and some attributes do not apply to certain individuals (i.e., N/A). We replaced them with zero ("0") to avoid null-value exceptions. We also removed unnecessary wording: for example, "3 times", used for the frequency of vomiting, was replaced by just 3. A data preprocessing example is shown in Table II.
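As an illustration only, the preprocessing described above can be sketched in Python with pandas; the file name and the column names (gender, vomiting_frequency) are hypothetical, since the report performs the equivalent steps before loading the data into WEKA.

import pandas as pd

# Load the raw stroke data (hypothetical file and column names).
df = pd.read_csv("stroke_raw.csv")

# Integer-encode string attributes, e.g. "Male" -> 0, "Female" -> 1.
df["gender"] = df["gender"].map({"Male": 0, "Female": 1})

# Strip unnecessary wording such as "3 times" -> 3.
df["vomiting_frequency"] = (
    df["vomiting_frequency"].astype(str).str.extract(r"(\d+)")[0].astype(float)
)

# Replace missing and non-applicable (N/A) values with zero
# to avoid null-value exceptions, as described above.
df = df.fillna(0)

df.to_csv("stroke_preprocessed.csv", index=False)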
B. Data mining process
Waikato Environment for Knowledge Analysis (WEKA) is a machine learning toolkit developed and maintained by the University of Waikato, New Zealand [10]. Previous studies show that WEKA is a very reliable suite for machine learning. A large number of similar works have been carried out using WEKA, and they have found it advantageous [11] [12] [13] [14]. We have used the built-in algorithms in WEKA, such as Naive Bayes, Random Forest, and J48, for stroke disease detection. These algorithms are described previously. First, we import the data from the stroke database. After pre-processing and integer encoding, we apply WEKA to classify the strokes. The following steps have been performed for stroke detection in WEKA (a sketch of an equivalent pipeline in Python is given after the work-flow note below):
• Data pre-processing and visualization
• Attribute selection
• Test set and train set splitting
• Classification using different algorithms
The work-flow of data mining is given in Fig. 3.
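For readers who prefer code to the WEKA GUI, a minimal sketch of an equivalent pipeline in Python with scikit-learn is given below. It assumes the preprocessed CSV from the earlier sketch with an integer-encoded stroke_type label column; the report itself used WEKA's built-in implementations.

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Assumed preprocessed dataset with an integer-encoded "stroke_type" label.
df = pd.read_csv("stroke_preprocessed.csv")
X = df.drop(columns=["stroke_type"])
y = df["stroke_type"]

# Test set and train set splitting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Classification using different algorithms.
for name, clf in [("Naive Bayes", GaussianNB()),
                  ("Random Forest", RandomForestClassifier())]:
    clf.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))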
Literature Survey
We included patients aged 18 to 50 years with ischemic stroke, TIA, or amaurosis fugax referred for thrombophilia investigation at Aarhus University Hospital, Denmark, from 1 January 2004 to 31 December 2012 (N = 685). Clinical information was obtained from the Danish Stroke Registry and medical records. Thrombophilia investigation results were obtained from the laboratory information system. Absolute thrombophilia prevalences and associated odds ratios (OR) with 95% confidence intervals (95% CI) were reported for ischemic stroke (N = 377) and TIA or amaurosis fugax (N = 308). Thrombophilia prevalences for the general population were obtained from published data.
This paper presents a prototype to classify stroke that combines text mining tools and machine learning algorithms. Machine learning, with suitably trained algorithms, can be a significant tool in areas like surveillance, medicine, and data management. The data mining techniques applied in this work give an overall review of tracking information from both semantic and syntactic perspectives. The proposed idea is to mine patients' symptoms from the case sheets and train the system with the acquired data. In the data collection phase, the case sheets of 507 patients were collected from Sugam Multispecialty Hospital, Kumbakonam, Tamil Nadu, India. Next, the case sheets were mined using tagging and maximum entropy methodologies, and the proposed stemmer extracted the common and unique sets of attributes to classify the strokes. Then, the processed data were fed into various machine learning algorithms such as artificial neural networks, support vector machines, boosting, bagging, and random forests. Among these algorithms, artificial neural networks trained with a stochastic gradient descent algorithm outperformed the others, with a higher classification accuracy of 95% and a smaller standard deviation of 14.69.
Early diagnosis of stroke is essential for timely prevention and treatment. Investigation shows that measures extracted from various risk parameters carry valuable information for the prediction of stroke. This research work investigates the various physiological parameters that are used as risk factors for the prediction of stroke. Data was collected from the International Stroke Trial database and was successfully trained and tested using a Support Vector Machine (SVM). In this work, we implemented SVM with different kernel functions and found that the linear kernel gave an accuracy of 90%.
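As a hedged illustration of such a kernel comparison, the sketch below uses scikit-learn with synthetic data standing in for the 12 risk-factor attributes, which are not available here.

from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for 350 samples with 12 risk-factor attributes.
X, y = make_classification(n_samples=350, n_features=12, random_state=0)

# Compare kernels as in the study: linear, quadratic (poly, degree 2), RBF.
for kernel, params in [("linear", {}), ("poly", {"degree": 2}), ("rbf", {})]:
    scores = cross_val_score(SVC(kernel=kernel, **params), X, y, cv=5)
    print(kernel, "mean accuracy:", round(scores.mean(), 3))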
The International Stroke Trial (IST) is one of the largest randomized trials ever conducted on individual patients in acute stroke. The IST dataset includes data on 19,435 patients with acute stroke, with 99% complete follow-up. Over 26.4% of patients were aged over 80 years at study entry. Background stroke care was limited, and none of the patients received thrombolytic therapy. The clinical trial was conducted between 1991 and 1996, with a pilot phase between 1991 and 1993. This study is a large, prospective, randomized controlled trial with 100% complete baseline data and over 99% complete follow-up data. For each randomized patient, data were extracted on the variables assessed at randomization; the early outcome point was 14 days after randomization or prior discharge, with a further assessment at 6 months, and the results were provided as an analyzable database. The aim of the trial was to establish whether early administration of aspirin, heparin, both, or neither influenced the clinical course of an acute ischaemic stroke.
In today's world, data mining plays a vital role in the prediction of diseases in the medical industry. Stroke is a life-threatening disease that has been ranked the third leading cause of death in the United States and in developing countries. Stroke is also a leading cause of serious long-term disability in the US. The time taken to recover from stroke depends on the patient's severity. A number of works have been carried out for predicting various diseases by comparing the performance of predictive data mining. Here, classification algorithms such as Decision Tree, Naive Bayes, and Neural Network are used for predicting the presence of stroke disease with a related number of attributes. In our work, the principal component analysis algorithm is used for reducing the dimensions; it determines the attributes contributing most toward the prediction of stroke disease and predicts whether the patient is suffering from stroke disease or not.
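As an illustration of this kind of PCA-based dimensionality reduction, a minimal scikit-learn sketch is given below; the synthetic data and the 95% variance threshold are assumptions for illustration only.

from sklearn.datasets import make_classification
from sklearn.decomposition import PCA

# Synthetic stand-in for a high-dimensional patient dataset (1000 records).
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)

# Keep the principal components that explain 95% of the variance.
pca = PCA(n_components=0.95)
X_reduced = pca.fit_transform(X)
print("reduced from", X.shape[1], "to", X_reduced.shape[1], "features")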
3.3 SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:
SOFTWARE REQUIREMENTS:
FEASIBILITY STUDY
ECONOMIC FEASIBILITY
TECHNICAL FEASIBILITY
SOCIAL FEASIBILITY
ECONOMIC FEASIBILITY
This study is carried out to check the economic impact that the system will have on the organization. The amount of funds that the company can pour into the research and development of the system is limited, so the expenditures must be justified. The developed system is well within the budget, which was achieved because most of the technologies used are freely available. Only the customized products had to be purchased.
TECHNICAL FEASIBILITY
This study is carried out to check the technical feasibility, that is, the technical requirements of the system. Any system developed must not place a high demand on the available technical resources, as this would lead to high demands being placed on the client. The developed system must have modest requirements, as only minimal or no changes are required for implementing this system.
SOCIAL FEASIBILITY
This aspect of the study checks the level of acceptance of the system by the user. This includes the process of training the user to use the system efficiently. The user must not feel threatened by the system, but must instead accept it as a necessity. The level of acceptance by the users depends solely on the methods that are employed to educate the user about the system and to make them familiar with it. The user's level of confidence must be raised so that they are also able to make some constructive criticism, which is welcomed, as they are the final user of the system.
4.SYSTEM DESIGN
GOALS:
The primary goals in the design of the UML are as follows:
1. Provide users with a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
2. Provide extensibility and specialization mechanisms to extend the core concepts.
3. Be independent of particular programming languages and development processes.
4. Provide a formal basis for understanding the modeling language.
5. Encourage the growth of the OO tools market.
6. Support higher-level development concepts such as collaborations, frameworks, patterns, and components.
7. Integrate best practices.
USE CASE DIAGRAM:
The use case diagram shows the User actor interacting with the Dataset and the Comparison Graph.
SEQUENCE DIAGRAM:
5.SOFTWARE ENVIRONMENT
What is Python :-
Below are some facts about Python.
Python is currently the most widely used multi-purpose, high-level programming language.
Programmers have to type relatively little, and the indentation requirements of the language keep the code readable at all times.
The Python language is used by almost all tech-giant companies like Google, Amazon, Facebook, Instagram, Dropbox, Uber, etc.
The biggest strength of Python is its huge collection of standard libraries, which can be used for the following:
Machine Learning
GUI Applications (like Kivy, Tkinter, PyQt, etc.)
Web frameworks like Django (used by YouTube, Instagram, Dropbox)
Image processing (like OpenCV, Pillow)
Web scraping (like Scrapy, BeautifulSoup, Selenium)
Test frameworks
Multimedia
Advantages of Python :-
1. Extensive Libraries
Python downloads with an extensive library containing code for various purposes like regular expressions, documentation generation, unit testing, web browsers, threading, databases, CGI, email, image manipulation, and more. So, we don't have to write the complete code for that manually.
2. Extensible
As we have seen earlier, Python can be extended with other languages. You can write some of your code in languages like C++ or C. This comes in handy, especially in projects where performance is critical.
3. Embeddable
Complimentary to extensibility, Python is embeddable as well. You can put your Python
code in your source code of a different language, like C++. This lets us add scripting
capabilities to our code in the other language.
4. Improved Productivity
5. IOT Opportunities
Since Python forms the basis of new platforms like Raspberry Pi, it finds the future bright for
the Internet Of Things. This is a way to connect the language with the real world.
6. Simple
When working with Java, you may have to create a class just to print 'Hello World'. But in Python, a single print statement will do. It is also quite easy to learn, understand, and code. This is why, when people pick up Python, they have a hard time adjusting to other, more verbose languages like Java.
7. Readable
Because it is not such a verbose language, reading Python is much like reading English.
This is the reason why it is so easy to learn, understand, and code. It also does not need
curly braces to define blocks, and indentation is mandatory. This further aids the
readability of the code.
8. Object-Oriented
Python supports both the procedural and object-oriented programming paradigms.
9. Free and Open-Source
Like we said earlier, Python is freely available. But not only can you download Python for free, you can also download its source code, make changes to it, and even distribute it. It downloads with an extensive collection of libraries to help you with your tasks.
10. Portable
When you code your project in a language like C++, you may need to make some changes
to it if you want to run it on another platform. But it isn’t the same with Python. Here, you
need to code only once, and you can run it anywhere. This is called Write Once Run
Anywhere (WORA). However, you need to be careful enough not to include any system-
dependent features.
11. Interpreted
Lastly, we will say that it is an interpreted language. Since statements are executed one by
one, debugging is easier than in compiled languages.
Advantages of Python Over Other Languages
1. Less Coding
Almost every task done in Python requires less coding than the same task done in other languages. Python also has awesome standard library support, so you don't have to search for any third-party libraries to get your job done. This is the reason many people suggest learning Python to beginners.
2. Affordable
Python is free, therefore individuals, small companies, or big organizations can leverage the freely available resources to build applications. Python is popular and widely used, so it gives you better community support.
The 2019 GitHub annual survey showed us that Python has overtaken Java in the most popular programming language category.
3. Python is for Everyone
Python code can run on any machine, whether it is Linux, Mac, or Windows. Programmers need to learn different languages for different jobs, but with Python you can professionally build web apps, perform data analysis and machine learning, automate things, do web scraping, and also build games and powerful visualizations. It is an all-rounder programming language.
Disadvantages of Python
So far, we’ve seen why Python is a great choice for your project. But if you choose it, you
should be aware of its consequences as well. Let’s now see the downsides of choosing
Python over another language.
1. Speed Limitations
We have seen that Python code is executed line by line. But since Python is interpreted, it
often results in slow execution. This, however, isn’t a problem unless speed is a focal point
for the project. In other words, unless high speed is a requirement, the benefits offered by
Python are enough to distract us from its speed limitations.
3. Design Restrictions
As you know, Python is dynamically-typed. This means that you don’t need to declare the
type of variable while writing the code. It uses duck-typing. But wait, what’s that? Well, it
just means that if it looks like a duck, it must be a duck. While this is easy on the
programmers during coding, it can raise run-time errors.
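A tiny sketch of how duck typing can defer an error to run time (an invented example, not from any library):

class Duck:
    def quack(self):
        return "Quack!"

class Dog:
    pass

def make_it_quack(animal):
    # No type declaration: anything with a .quack() method is accepted.
    return animal.quack()

print(make_it_quack(Duck()))  # works
try:
    print(make_it_quack(Dog()))  # fails only at run time
except AttributeError as err:
    print("Run-time error:", err)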
5. Simple
No, we’re not kidding. Python’s simplicity can indeed be a problem. Take my example. I
don’t do Java, I’m more of a Python person. To me, its syntax is so simple that the verbosity
of Java code seems unnecessary.
This was all about the Advantages and Disadvantages of Python Programming Language.
History of Python :-
What do the alphabet and the programming language Python have in common? Right, both start with ABC. If we are talking about ABC in the Python context, it's clear that the programming language ABC is meant. ABC is a general-purpose programming language and programming environment which was developed in the Netherlands, in Amsterdam, at the CWI (Centrum Wiskunde & Informatica). The greatest achievement of ABC was to influence the design of Python. Python was conceptualized in the late 1980s. Guido van Rossum worked at that time on a project at the CWI called Amoeba, a distributed operating system. In an interview with Bill Venners, Guido van Rossum said: "In the early 1980s, I worked as an implementer on a team building a language called ABC at Centrum voor Wiskunde en Informatica (CWI). I don't know how well people know ABC's influence on Python. I try to mention ABC's influence because I'm indebted to everything I learned during that project and to the people who worked on it." Later in the same interview, Guido van Rossum continued: "I remembered all my experience and some of my frustration with ABC. I decided to try to design a simple scripting language that possessed some of ABC's better properties, but without its problems. So I started typing. I created a simple virtual machine, a simple parser, and a simple runtime. I made my own version of the various ABC parts that I liked. I created a basic syntax, used indentation for statement grouping instead of curly braces or begin-end blocks, and developed a small number of powerful data types: a hash table (or dictionary, as we call it), a list, strings, and numbers."
Before we take a look at the details of various machine learning methods, let's start by
looking at what machine learning is, and what it isn't. Machine learning is often categorized
as a subfield of artificial intelligence, but I find that categorization can often be misleading at
first brush. The study of machine learning certainly arose from research in this context, but
in the data science application of machine learning methods, it's more helpful to think of
machine learning as a means of building models of data.
At the most fundamental level, machine learning can be categorized into two main types:
supervised learning and unsupervised learning.
Human beings are, at this moment, the most intelligent and advanced species on earth because they can think, evaluate, and solve complex problems. AI, on the other hand, is still in its initial stage and has not surpassed human intelligence in many aspects. The question, then, is: what is the need to make machines learn? The most suitable reason for doing this is "to make decisions, based on data, with efficiency and scale".
Lately, organizations have been investing heavily in newer technologies like Artificial Intelligence, Machine Learning, and Deep Learning to extract the key information from data in order to perform several real-world tasks and solve problems. We can call these data-driven decisions taken by machines, particularly to automate the process. These data-driven decisions can be used, instead of programming logic, in problems that cannot be programmed inherently. The fact is that we can't do without human intelligence, but the other aspect is that we all need to solve real-world problems with efficiency at a huge scale. That is why the need for machine learning arises.
While Machine Learning is rapidly evolving, making significant strides with cybersecurity and autonomous cars, this segment of AI as a whole still has a long way to go. The reason is that ML has not been able to overcome a number of challenges. The challenges that ML is facing currently are −
Quality of data − Having good-quality data for ML algorithms is one of the biggest challenges. Use of low-quality data leads to problems related to data preprocessing and feature extraction.
No clear objective for formulating business problems − Having no clear objective and well-defined goal for business problems is another key challenge for ML, because this technology is not that mature yet.
Issue of overfitting and underfitting − If the model is overfitting or underfitting, it cannot represent the problem well.
Curse of dimensionality − Another challenge an ML model faces is that the data points have too many features. This can be a real hindrance.
Machine Learning is the most rapidly growing technology, and according to researchers we are in the golden years of AI and ML. It is used to solve many real-world complex problems which cannot be solved with a traditional approach. The following are some real-world applications of ML −
Emotion analysis
Sentiment analysis
Speech synthesis
Speech recognition
Customer segmentation
Object recognition
Fraud detection
Fraud prevention
Arthur Samuel coined the term “Machine Learning” in 1959 and defined it as a “Field of
study that gives computers the capability to learn without being explicitly
programmed”.
And that was the beginning of Machine Learning! In modern times, Machine Learning is one of the most popular (if not the most popular!) career choices. According to Indeed, Machine Learning Engineer is the best job of 2019, with 344% growth and an average base salary of $146,085 per year.
But there is still a lot of doubt about what exactly Machine Learning is and how to start learning it. So this article deals with the basics of Machine Learning and also the path you can follow to eventually become a full-fledged Machine Learning Engineer. Now let's get started!
This is a rough roadmap you can follow on your way to becoming an insanely talented
Machine Learning Engineer. Of course, you can always modify the steps according to your
needs to reach your desired end-goal!
In case you are a genius, you could start ML directly but normally, there are some
prerequisites that you need to know which include Linear Algebra, Multivariate Calculus,
Statistics, and Python. And if you don’t know these, never fear! You don’t need a Ph.D.
degree in these topics to get started but you do need a basic understanding.
Both Linear Algebra and Multivariate Calculus are important in Machine Learning. However,
the extent to which you need them depends on your role as a data scientist. If you are more
focused on application heavy machine learning, then you will not be that heavily focused on
maths as there are many common libraries available. But if you want to focus on R&D in
Machine Learning, then mastery of Linear Algebra and Multivariate Calculus is very
important as you will have to implement many ML algorithms from scratch.
Data plays a huge role in Machine Learning. In fact, around 80% of your time as an ML
expert will be spent collecting and cleaning data. And statistics is a field that handles the
collection, analysis, and presentation of data. So it is no surprise that you need to learn it!!!
Some of the key concepts in statistics that are important are Statistical Significance, Probability Distributions, Hypothesis Testing, Regression, etc. Bayesian Thinking is also a very important part of ML; it deals with various concepts like Conditional Probability, Priors and Posteriors, Maximum Likelihood, etc.
Some people prefer to skip Linear Algebra, Multivariate Calculus, and Statistics and learn them as they go along, with trial and error. But the one thing that you absolutely cannot skip is Python! While there are other languages you can use for Machine Learning, like R, Scala, etc., Python is currently the most popular language for ML. In fact, there are many Python libraries that are specifically useful for Artificial Intelligence and Machine Learning, such as Keras, TensorFlow, Scikit-learn, etc.
So if you want to learn ML, it's best if you learn Python! You can do that using various online resources and courses, such as the free Python course available on GeeksforGeeks.
Now that you are done with the prerequisites, you can move on to actually learning ML (which is the fun part!!!). It's best to start with the basics and then move on to the more complicated stuff. Some of the basic concepts in ML are:
(a) Terminologies of Machine Learning
Model – A model is a specific representation learned from data by applying some machine
learning algorithm. A model is also called a hypothesis.
Feature – A feature is an individual measurable property of the data. A set of numeric
features can be conveniently described by a feature vector. Feature vectors are fed as input to
the model. For example, in order to predict a fruit, there may be features like color, smell,
taste, etc.
Target (Label) – A target variable or label is the value to be predicted by our model. For the
fruit example discussed in the feature section, the label with each set of input would be the
name of the fruit like apple, orange, banana, etc.
Training – The idea is to give a set of inputs (features) and its expected outputs (labels), so after training, we will have a model (hypothesis) that will then map new data to one of the categories it was trained on.
Prediction – Once our model is ready, it can be fed a set of inputs to which it will provide a predicted output (label). A short sketch tying these terms together is given below.
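To tie these terms together, here is a minimal sketch using scikit-learn and the fruit example above; the toy feature encodings and values are invented for illustration.

from sklearn.tree import DecisionTreeClassifier

# Features: [color, smell, taste], integer-encoded toy values.
X_train = [[0, 0, 0], [1, 1, 1], [2, 2, 2]]
# Targets (labels): the fruit names to be predicted.
y_train = ["apple", "orange", "banana"]

# Training: fit a model (hypothesis) to the labeled data.
model = DecisionTreeClassifier().fit(X_train, y_train)

# Prediction: map a new feature vector to one of the trained categories.
print(model.predict([[1, 1, 1]]))  # -> ['orange']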
(b) Types of Machine Learning
Supervised Learning – This involves learning from a training dataset with labeled data using classification and regression models. This learning process continues until the required level of performance is achieved.
Unsupervised Learning – This involves using unlabeled data and then finding the underlying structure in the data in order to learn more and more about the data itself, using factor and cluster analysis models.
Semi-supervised Learning – This involves using unlabeled data, as in Unsupervised Learning, together with a small amount of labeled data. Using labeled data vastly increases the learning accuracy and is also more cost-effective than Supervised Learning.
Reinforcement Learning – This involves learning optimal actions through trial and error. So
the next action is decided by learning behaviors that are based on the current state and that will
maximize the reward in the future.
Advantages of Machine Learning :-
1. Easily Identifies Trends and Patterns
Machine Learning can review large volumes of data and discover specific trends and patterns that would not be apparent to humans. For instance, for an e-commerce website like Amazon, it serves to understand the browsing behaviors and purchase histories of its users to help it offer the right products, deals, and reminders relevant to them. It uses the results to reveal relevant advertisements to them.
2. No Human Intervention Needed (Automation)
With ML, you don't need to babysit your project every step of the way. Since it means giving machines the ability to learn, it lets them make predictions and also improve the algorithms on their own. A common example of this is anti-virus software; it learns to filter new threats as they are recognized. ML is also good at recognizing spam.
3. Continuous Improvement
As ML algorithms gain experience, they keep improving in accuracy and efficiency. This lets
them make better decisions. Say you need to make a weather forecast model. As the amount of
data you have keeps growing, your algorithms learn to make more accurate predictions faster.
4. Handling Multi-Dimensional and Multi-Variety Data
Machine Learning algorithms are good at handling data that are multi-dimensional and multi-variety, and they can do this in dynamic or uncertain environments.
5. Wide Applications
You could be an e-tailer or a healthcare provider and make ML work for you. Where it applies, it holds the capability to help deliver a much more personal experience to customers while also targeting the right customers.
Disadvantages of Machine Learning :-
1. Data Acquisition
Machine Learning requires massive data sets to train on, and these should be
inclusive/unbiased, and of good quality. There can also be times where they must wait for new
data to be generated.
2. Time and Resources
ML needs enough time to let the algorithms learn and develop enough to fulfill their purpose with a considerable amount of accuracy and relevancy. It also needs massive resources to function. This can mean additional requirements of computing power for you.
3. Interpretation of Results
Another major challenge is the ability to accurately interpret results generated by the
algorithms. You must also carefully choose the algorithms for your purpose.
4. High error-susceptibility
Machine Learning is autonomous but highly susceptible to errors. Suppose you train an
algorithm with data sets small enough to not be inclusive. You end up with biased predictions
coming from a biased training set. This leads to irrelevant advertisements being displayed to
customers. In the case of ML, such blunders can set off a chain of errors that can go undetected
for long periods of time. And when they do get noticed, it takes quite some time to recognize
the source of the issue, and even longer to correct it.
Guido van Rossum published the first version of Python code (version 0.9.0) at alt.sources in February 1991. This release already included exception handling, functions, and the core data types list, dict, str, and others. It was also object-oriented and had a module system.
Python version 1.0 was released in January 1994. The major new features included in this release were the functional programming tools lambda, map, filter, and reduce, which Guido van Rossum never liked. Six and a half years later, in October 2000, Python 2.0 was introduced. This release included list comprehensions, a full garbage collector, and Unicode support. Python flourished for another 8 years in the versions 2.x before the next major release, Python 3.0 (also known as "Python 3000" and "Py3K"), appeared. Python 3 is not backwards compatible with Python 2.x. The emphasis in Python 3 was on the removal of duplicate programming constructs and modules, thus fulfilling or coming close to fulfilling the 13th law of the Zen of Python: "There should be one -- and preferably only one -- obvious way to do it."
Purpose :-
Python
Python is Interpreted − Python is processed at runtime by the interpreter. You do not need to compile your program before executing it. This is similar to PERL and PHP.
Python is Interactive − You can actually sit at a Python prompt and interact with the interpreter directly to write your programs.
Python also acknowledges that speed of development is important. Readable and terse code is part of this, and so is access to powerful constructs that avoid tedious repetition of code. Maintainability also ties into this; it may be an all but useless metric, but it does say something about how much code you have to scan, read, and/or understand to troubleshoot problems or tweak behaviors. This speed of development, the ease with which a programmer of other languages can pick up basic Python skills, and the huge standard library are key to another area where Python excels: all its tools have been quick to implement, have saved a lot of time, and several of them have later been patched and updated by people with no Python background, without breaking.
Tensorflow
TensorFlow was developed by the Google Brain team for internal Google use. It was
released under the Apache 2.0 open-source license on November 9, 2015.
Numpy
NumPy is the fundamental package for numerical computation in Python; it provides the N-dimensional array object and tools for working with these arrays.
Pandas
Pandas provides high-performance, easy-to-use data structures, such as the DataFrame, and data analysis tools for Python.
Matplotlib
Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of formats.
Python features a dynamic type system and automatic memory management. It supports
multiple programming paradigms, including object-oriented, imperative, functional and
procedural, and has a large and comprehensive standard library.
There have been several updates to the Python version over the years. The question is how to install Python. It might be confusing for the beginner who is willing to start learning Python, but this tutorial will solve your query. The latest or newest version of Python is version 3.7.4; in other words, it is Python 3.
Note: The Python version 3.7.4 cannot be used on Windows XP or earlier devices.
Before you start with the installation process of Python, you first need to know about your system requirements. Based on your system type, i.e., operating system and processor, you must download the matching Python version. My system type is a Windows 64-bit operating system. So the steps below are to install Python version 3.7.4 on a Windows 7 device, or in other words, to install Python 3. The steps on how to install Python on Windows 10, 8, and 7 are divided into 4 parts to help you understand better.
Step 1: Go to the official site to download and install Python using Google Chrome or any other web browser, or click on the following link: https://www.python.org
Step 2: Now, check for the latest and the correct version for your operating system.
Step 3: You can either select the Download Python 3.7.4 button in yellow, or you can scroll further down and click on the download corresponding to your version. Here, we are downloading the most recent Python version for Windows, 3.7.4.
Step 4: Scroll down the page until you find the Files option.
Step 5: Here you see the different versions of Python along with the operating system.
• To download 32-bit Python for Windows, you can select any one of the three options: Windows x86 embeddable zip file, Windows x86 executable installer, or Windows x86 web-based installer.
• To download 64-bit Python for Windows, you can select any one of the three options: Windows x86-64 embeddable zip file, Windows x86-64 executable installer, or Windows x86-64 web-based installer.
Here we will install the Windows x86-64 web-based installer. With this, the first part, regarding which version of Python is to be downloaded, is complete. Now we move ahead with the second part of installing Python, i.e., the installation itself.
Note: To know the changes or updates made in the version, you can click on the Release Notes option.
Installation of Python
Step 1: Go to Download and Open the downloaded python version to carry out the installation
process.
Step 2: Before you click on Install Now, make sure to put a tick on Add Python 3.7 to PATH.
Step 3: Click on Install Now. After the installation is successful, click on Close.
With the above three steps of Python installation, you have successfully and correctly installed Python. Now it is time to verify the installation.
Note: The installation process might take a couple of minutes.
Step 3: Click on IDLE (Python 3.7 64-bit) and launch the program.
Step 4: To go ahead with working in IDLE, you must first save the file. Click on File > Click on Save.
Step 5: Name the file, and the save-as type should be Python files. Click on SAVE. Here I have named the file Hey World.
Step 6: Now, for example, enter a simple print statement and run it to test the setup.
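For instance, the first test program might be nothing more than a print statement (the file name Hey World follows the example above; the printed text is arbitrary):

# Hey World.py - a first program to verify the installation
print("Hey World")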
6.SYSTEM TEST
The purpose of testing is to discover errors. Testing is the process of trying to discover every conceivable fault or weakness in a work product. It provides a way to check the functionality of components, subassemblies, assemblies, and/or the finished product. It is the process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and does not fail in an unacceptable manner. There are various types of tests. Each test type addresses a specific testing requirement.
TYPES OF TESTS
Unit testing
Unit testing involves the design of test cases that validate that the internal program logic is functioning properly and that program inputs produce valid outputs. All decision branches and internal code flow should be validated. It is the testing of individual software units of the application; it is done after the completion of an individual unit before integration. This is structural testing that relies on knowledge of the unit's construction and is invasive. Unit tests perform basic tests at the component level and test a specific business process, application, and/or system configuration. Unit tests ensure that each unique path of a business process performs accurately to the documented specifications and contains clearly defined inputs and expected results. A sketch of such a test is given below.
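As a hedged illustration, a unit test for a hypothetical gender-encoding helper (mirroring the preprocessing described earlier) could be written with Python's built-in unittest module:

import unittest

def encode_gender(value):
    # Hypothetical helper: integer-encode the gender attribute.
    return {"Male": 0, "Female": 1}.get(value, 0)

class TestEncodeGender(unittest.TestCase):
    def test_known_values(self):
        self.assertEqual(encode_gender("Male"), 0)
        self.assertEqual(encode_gender("Female"), 1)

    def test_missing_value_defaults_to_zero(self):
        # N/A or missing entries are replaced with 0 in preprocessing.
        self.assertEqual(encode_gender("N/A"), 0)

if __name__ == "__main__":
    unittest.main()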
Integration testing
Integration tests are designed to test integrated software components to determine if they actually run as one program. Testing is event driven and is more concerned with the basic outcome of screens or fields. Integration tests demonstrate that although the components were individually satisfactory, as shown by successful unit testing, the combination of components is correct and consistent. Integration testing is specifically aimed at exposing the problems that arise from the combination of components.
Functional test
Functional tests provide systematic demonstrations that functions tested are
available as specified by the business and technical requirements, system documentation, and
user manuals.
Functional testing is centered on items such as valid input, invalid input, functions, and output.
System Test
System testing ensures that the entire integrated software system meets
requirements. It tests a configuration to ensure known and predictable results. An example of
system testing is the configuration oriented system integration test. System testing is based on
process descriptions and flows, emphasizing pre-driven process links and integration points.
Unit testing is usually conducted as part of a combined code and unit test phase
of the software lifecycle, although it is not uncommon for coding and unit testing to be
conducted as two distinct phases.
Field testing will be performed manually and functional tests will be written in
detail.
Test objectives
All field entries must work properly.
Pages must be activated from the identified link.
The entry screen, messages and responses must not be delayed.
Features to be tested
Verify that the entries are of the correct format
No duplicate entries should be allowed
All links should take the user to the correct page.
Integration Testing
Software integration testing is the incremental integration testing of two or more
integrated software components on a single platform to produce failures caused by interface
defects.
The task of the integration test is to check that components or software applications, e.g., components in a software system or, one step up, software applications at the company level, interact without error.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
Acceptance Testing
User Acceptance Testing is a critical phase of any project and requires significant participation
by the end user. It also ensures that the system meets the functional requirements.
Test Results: All the test cases mentioned above passed successfully. No defects encountered.
7.SCREENSHOTS
In the above screen we can see the dataset loaded; the dataset contains many missing and non-numeric values, so click on the 'Dataset Preprocessing & Features Selection' button to process the dataset and get the output below.
In the above graph, the x-axis represents the classes 0 (normal) and 1 (stroke) and the y-axis represents the number of instances available in those categories in the dataset. Now close the graph and see the next screen.
In the above screen we can see that the entire dataset has been converted to numeric format and then split into train and test sets. Now click on the 'Train Naïve Bayes Algorithm' button to train Naïve Bayes on the dataset and get the output below.
In the above screen, Naïve Bayes achieved 77% accuracy, and in the confusion matrix graph we can see the number of correct and incorrect predictions made by Naïve Bayes. Now click on the 'Train J48 Algorithm' button to get the output below.
In the above screen, J48 achieved 73% accuracy, and in the confusion matrix graph we can see the number of correct and incorrect predictions made by J48. Now close the graph and then click on the 'Run KNN Algorithm' button to get the output below.
In the above screen, KNN achieved 69% accuracy, and in the confusion matrix graph we can see the number of correct and incorrect predictions made by KNN. Now close the graph and then click on the 'Run Random Forest Algorithm' button to get the output below.
In the above screen, Random Forest achieved 78% accuracy, and in the confusion matrix graph we can see the number of correct and incorrect predictions made by Random Forest. Now close the graph and then click on the 'Run ANN Algorithm' button to get the output below.
In the above screen, ANN achieved 78.33% accuracy, and in the confusion matrix graph we can see the number of correct and incorrect predictions made by ANN; among all the algorithms, ANN achieved the highest accuracy. Now close the graph and then click on the 'Comparison Graph' button to get the graph below.
In the above graph, the x-axis represents the algorithm names and the y-axis represents accuracy and other metrics such as precision and recall; different coloured bars represent the different metrics, and among all the algorithms ANN achieved the highest accuracy.
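A comparison graph of this kind can be reproduced with Matplotlib; the sketch below uses only the accuracy figures reported in the screenshots above, while precision and recall would come from the corresponding confusion matrices.

import matplotlib.pyplot as plt

# Accuracies reported in the screenshots above.
algorithms = ["Naive Bayes", "J48", "KNN", "Random Forest", "ANN"]
accuracy = [77, 73, 69, 78, 78.33]

plt.bar(algorithms, accuracy, color="steelblue")
plt.ylabel("Accuracy (%)")
plt.title("Comparison of stroke classification algorithms")
plt.ylim(0, 100)
plt.show()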
CONCLUSION:
In this paper, a sufficiently large dataset of stroke-affected patients has been classified accurately. Four classifiers, Naive Bayes, J48, k-NN, and Random Forest, were used for the detection of stroke disease. From the performance analysis we see that Naive Bayes performs better than the other methods. The novelty and the main contribution of our work lie in collecting this dataset and preparing it for use with WEKA. The model can provide people with a cautionary indication of being affected by stroke. Healthcare industries generate huge amounts of complex data about patients, hospital resources, disease diagnosis, electronic patient records, medical devices, etc., which is very difficult to relate to one another even for a field expert. This work will help the clinician to better understand the type of disease. The limitation of our method is that the dataset is not perfectly symmetrical. However, this did not affect the prediction accuracy of the other algorithms; the Naive Bayes algorithm did not work as we expected.
In future work, it is possible to extend the research by using different classification techniques. Moreover, the prediction of stroke can be improved by adding some non-stroke data to the existing dataset.
REFERENCES
[1] S. H. Pahus, A. T. Hansen, and A.-M. Hvas, "Thrombophilia testing in young patients with ischemic stroke," Thrombosis Research, vol. 137, pp. 108–112, 2016.
[3] L. T. Kohn, J. Corrigan, M. S. Donaldson, et al., To Err Is Human: Building a Safer Health System, vol. 6. National Academy Press, Washington, DC, 2000.
[4] R. Jeena and S. Kumar, "Stroke prediction using SVM," in 2016 International Conference on Control, Instrumentation, Communication and Computational Technologies (ICCICCT), pp. 600–602, IEEE, 2016.
[7] S. Y. Adam, A. Yousif, and M. B. Bashir, "Classification of ischemic stroke using machine learning algorithms," International Journal of Computer Applications, vol. 149, no. 10, pp. 26–31, 2016.
[8] A. Sudha, P. Gayathri, and N. Jaisankar, "Effective analysis and predictive model of stroke disease using classification methods," International Journal of Computer Applications, vol. 43, no. 14, pp. 26–31, 2012.
[9] G. Kaur and A. Chhabra, "Improved J48 classification algorithm for the prediction of diabetes," International Journal of Computer Applications, vol. 98, no. 22, 2014.
[10] I. H. Witten, E. Frank, M. A. Hall, and C. J. Pal, Data Mining: Practical Machine Learning Tools and Techniques. Morgan Kaufmann, 2016.
[11] P. Sewaiwar and K. K. Verma, "Comparative study of various decision tree classification algorithms using WEKA," International Journal of Emerging Research in Management & Technology, vol. 4, pp. 2278–9359, 2015.
[12] K. A. Shakil, S. Anis, and M. Alam, "Dengue disease prediction using WEKA data mining tool," arXiv preprint arXiv:1502.05167, 2015.
[13] J. A. Alkrimi, H. A. Jalab, L. E. George, A. R. Ahmad, A. Suliman, and K. Al-Jashamy, "Comparative study using WEKA for red blood cells classification," International Journal of Medical, Health, Pharmaceutical and Biomedical Engineering, vol. 9, no. 1, pp. 19–22, 2015.