A Novel Approach Towards Healthcare Using Identity Access Management and Machine Learning
https://doi.org/10.22214/ijraset.2022.44459
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue VI June 2022- Available at www.ijraset.com
Abstract: Advances in the fields of deep learning and cognitive computing have allowed mankind to look at and solve the
problems of the world in a completely new way. Early detection of some deadly diseases helps save millions of lives, yet it
has been observed that there is little change in the way a particular disease is diagnosed, even in twenty-first-century
medical health care. The principal reasons are lack of trust, lack of awareness, and lack of infrastructure. The healthcare
industry remains blocked by rigid methodologies, and to this date there is little involvement of the patient and caretaker
staff in the digitalization of healthcare. In this project, we introduce a digital platform for healthcare that aims to reduce
the gap between the doctor, the patient, and the caretaker staff. This data-driven platform can assist surgeons, patients,
and care teams throughout the patient journey by automating some of the critical processes that are now done manually,
including decision-making, post-surgery planning, tracking and estimating recovery time, smart disease detection, and
collecting patient feedback. An important aspect of this platform is its disease prediction models: we have prepared three
different models that can detect whether a user has a specific disease without consultation with a doctor. We also discuss
the adoption of deep learning and artificial intelligence in today's healthcare scenario and the crucial role of delivering
such applications to the user on a single platform.
Keywords: Identity Access Management, Medical Image analysis, Convolution neural network, Disease detection algorithms,
ResNet-101, ResNet-50, Mask RCNN, Deep learning models.
I. INTRODUCTION
Nowadays, everyone has a smart gadget that links them to the internet, and this is where the speed of data transfer and data
availability comes into play. Many people who need medical services for minor inconveniences but are unable to travel for the
required medical treatment might benefit from this digital method. Medical image analysis is one of the fields that has seen
breakthrough research and provides applications that will benefit millions of people. An important aspect of medical image
analysis is the algorithm it is based upon. Machine learning (ML) algorithms can be defined as programs that understand the
complexity of the task they are designed for and perform better as they are exposed to more and more data. The term was first
introduced by Arthur Samuel around 1959. These algorithms are designed using the logic of statistics and mathematics, which
makes the resulting deep learning model accurate and functional. To build accurate disease models, it is desirable that these
models be built on well-designed neural networks. Neural networks are a disease prediction model's building blocks. They are
also used in a variety of financial services applications, ranging from forecasting and market research to fraud detection and
risk assessment. The fundamental benefit of a neural network is that it adds computational capability to the model, reducing
the need for human interaction in the model's operation [1].
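As a minimal illustration of the idea, the forward pass of a tiny feed-forward network can be sketched in a few lines; the weights here are hand-picked for demonstration and are not taken from any trained model in this work:

```python
# Minimal sketch of a feed-forward neural network forward pass.
# Weights and features are illustrative assumptions, not trained values.
import math

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise.
    return max(0.0, x)

def sigmoid(x):
    # Squashes a raw score into the (0, 1) range, usable as a probability.
    return 1.0 / (1.0 + math.exp(-x))

def forward(features, w_hidden, w_out):
    # Hidden layer: weighted sums of the inputs passed through ReLU.
    hidden = [relu(sum(w * f for w, f in zip(ws, features))) for ws in w_hidden]
    # Output layer: a single sigmoid neuron gives a disease probability.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)))

# Two input features, two hidden neurons, one output probability.
p = forward([0.8, 0.2], [[0.5, -0.3], [0.1, 0.9]], [1.2, -0.7])
print(0.0 < p < 1.0)  # True: the output is a valid probability
```

In a real disease prediction model the weights would be learned by backpropagation rather than fixed by hand.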
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 2792
We may also remark that manual diagnosis can be impractical when dealing with big volumes of data. As a result, automated
disease identification and diagnosis detection systems are being developed in order to save radiologists time.
In the realm of medical image analysis, innovations in the areas of artificial intelligence techniques for the classification,
segmentation, and grading of different malignancies using various imaging modalities have lately grown more prominent. Database
operations such as feature extraction and data augmentation are a few methods related to data pre-processing that aim to classify,
filter and clean the dataset for better operations. To carry out the comparative study, research was carried out on a number
of existing models that have been suggested and developed by various researchers over time. To segment brain tumours, one of
the proposed models employs the convolutional neural network (CNN) method together with the fuzzy c-means strategy. Their
model had a 97% sensitivity rate and a 96.97% accuracy rate. Using this method, they extracted four distinct features that
symbolize different properties of each image, in this case an MRI scan of the brain, computed at four angles (0°, 45°, 90°,
and 135°). Four datasets were used: the meningioma-glioma dataset (Mg-Gl), the meningioma-pituitary tumour dataset (Mg-Pt),
the glioma-pituitary tumour dataset (Gl-Pt), and the meningioma-glioma-pituitary tumour dataset (Mg-Gl-Pt), all donated by a
Chinese school of biomedical engineering [2], [3]. A deep convolutional neural network-based framework for brain tumour
recognition and grading was also introduced. The notion of fuzzy c-means (FCM) was used for brain segmentation, and features
describing these segmented areas and their shapes were extracted before being fed into support vector machine and
convolutional neural network classifiers. The performance metrics showed that the framework was able to accomplish 97.5%
accuracy [4]. Later on, another system was proposed with a strategy that uses region-of-interest augmentation and fine
ring-form partition to improve the efficiency of the brain tumour classification procedure. They utilized comparable feature
extraction approaches, such as bag-of-words (BoW), which involves feeding the resulting feature vectors into a classifier.
The accuracy of the BoW and other feature extraction methods improved from 88.92% to 90.98%,
according to the experimental findings. Another study used a novel convolutional neural network approach for non-invasive
segmentation and classification of glioma brain tumours. The categorization was completed using a complete scan of the MRI
images of the brain; therefore the label was not at the pixel level, but rather at the image level. The final metrics
obtained from the experiments revealed that this approach was successful at low computational cost, with an accuracy of
90.36% [5]. Sajjad, Muhammad, et al. [6] investigated a system for brain tumour classification that applied various dataset
operations as a pre-processing approach together with a CNN. The approach employed segmented MRI scans of the brain to
classify brain cancers into several grades. For classification, they employed the pre-trained VGG-19 CNN architecture, which
achieved accuracies of 87.58% and 90.47% on the data before and after pre-processing, respectively. Özyurt, Fatih, et al. [7]
created a method for brain tumour classification that combines CNN methods with neutrosophic and expert maximum fuzzy
entropy. For brain tumour segmentation, they utilized the neutrosophic set and expert maximum fuzzy-sure entropy methods;
the segmented pictures were then given to the CNN to extract features, and then to SVM classifiers, a machine learning
algorithm, for further categorization. They achieved a mean success rate of 94.68%. In this bibliographic review, we have
gone through close to 34 research papers from the Scopus directory, from which we have drawn the following observations,
which will prove useful in our proposed system. In a study published on the subject of fusion and extraction of features
from a deep neural network, a strategy was proposed that uses attribute fusion to better describe pictures for face
recognition using deep CNN attribute extraction. They utilized principal component analysis to reduce the dimensionality of
the fused attributes, and an SVM classifier was used for the two classes. According to test results, this method can detect
faces with extreme occlusion, substantial clutter, and size discrepancies. On the face detection dataset and benchmark, this
approach achieves an 89% recall rate and was also found to be 97% accurate [8].
and was also found to be 97% accurate [8]. Er-Yang Huan et al. [9], CNN-based body constitution’ recognition system that can
recognize individual constitution’ types which is basically based on face scans. The suggested model first extracts the facial picture
ascribed with CNN, then combines the preoccupied highlights with the tone credits. To obtain the gathering result, the combined
subtleties are sent to the Soft-max classifier. They claim that such a method suggested during this research can achieve an accuracy
of 65.3%. A new and innovative method had also been introduced in which a cycle of functions was designed using a fuzzy c-means
collecting technique, conventional computations, and CNN, extract brain tumours from a 2-D MRI brain. The observational
research was focused on a real-time dataset containing a variety of malignant growth measures, spots, patterns, and image quality.
Six standard classifiers were used in the old-style calculation area, including SVM, k-nearest neighbours, multilayer perceptron,
logistic regression, nave Bayes, and random forest, which were all used in scikit-learn. CNN received a 97% efficiency rating [10].
The research first explains the most often used processes in paragraph attribute extraction, then expands on the frequently used DL
process in paragraph attribute extraction and its implementation, and anticipates the application of machine learning in feature
abstraction. They conclude that associated with other machine learning approaches.
From nearly unprocessed original data, Deep Learning (DL) can detect complex interactions among characteristics and learn
lower-level features [11]. Heba Mohsen et al. [12] classified brain cancers using learning neural networks: they used a Deep
Neural Network (DNN) to classify a batch of 66 MRI pictures of brain tumours. In terms of effectiveness, they find that the
DNN approach beats conventional classifiers. In another efficient and effective method, a convolutional network is used for
classification and segmentation; the proposed method used ImageNet for abstract characteristics, and the results were 97%
and 84% precise for classification and segmentation, respectively [13]. DL structures and baseline neural frameworks for
disease classification from MRI pictures have also been considered and assessed; the results reveal that the specificity and
sensitivity of the neural-network-based framework outperformed an Artificial Neural Network (ANN) by 19% [14]. A new
approach that uses a CNN to classify brain tumours into benign and three malignant types has been proposed. Using an
enhanced independent component analysis composite model, the tumour is first segmented from MRI images; after segmentation,
features are extracted and classified. Many studies have investigated the role of CNNs in segmenting brain tumours, first by
giving an enlightening look into CNNs and then by carrying out research to obtain an example segmentation pipeline, and also
by looking into the long-term efficacy of CNNs through the new field of radiomics. This research examines the quantitative
characteristics of brain tumours, such as form, texture, and signal intensity, in order to predict clinical outcomes such as
the presence of tumours and treatment response [15]. Other research suggests detecting a brain tumour by applying CNN and
ANN classification sequentially. To create a more detailed architecture, small kernels and neuron weights were devised.
According to the research findings, the CNN records 97% accuracy with minimal difficulty, comparable with the latest
techniques [16]; after inferring from the confusion matrix and the algorithm's results, the network records 74% accuracy.
Using a convolutional network with three kernels, an auto-differentiation approach is used to identify cancer. The approach
simultaneously accomplished identification of the whole, core, and enhanced regions with Dice similarity and quantity
metrics of 0.85, 0.81, and 0.75, respectively [17]. Research conducted on DL and its role in Covid-19 aimed at diagnosing
and detecting the coronavirus disease through various radiology modalities such as X-rays and Computed Tomography (CT)
scans; this model was able to provide an accuracy higher than 92% [18]. In research on a CNN algorithm, the CNN is used to
extract target properties from sonar images; an SVM, trained on the initially produced data, is used in the recognition
step. The outcome illustrates the value of fully convolutional attribute extraction [19]. Applications of DL in medicine,
and how they are making a difference and saving millions of lives, have also been surveyed, covering techniques such as
medical imaging, the history of medical imaging, CNNs, supervised learning models and clustering [20]. Finally, a method was
presented that uses CNN computation, data augmentation and image preparation to sort brain MRI scan images into malignant
and non-malignant classes, examining the results of a from-scratch CNN against previously constructed VGG-16, ResNet 50, and
Inception V3 models. In the end, the scratch model's accuracy was 99.95 percent, while VGG-16's was 95%, ResNet 50's was
87%, and Inception V3's was 78% [21].
IV. METHODOLOGY
The objective of developing a detection model using machine learning algorithms is to assist physicians in detecting and identifying
diseases early on, which will benefit the treatment of health-related issues. Sample data for the three diseases were
collected separately, and separate models were developed for the sake of better accuracy and more optimized code quality.
The three disease detection models include brain tumour detection, pancreatic tumour detection and Covid-19 detection. All
three models utilize different Python libraries and ML algorithms; hence we created three different models.
Upon the development of the three detection models, we constructed a software architecture to make these three disease
detection models available to anyone on the internet via a hosted server. Development of every disease detection model
requires a set of mandatory steps before model training and testing, namely data extraction, data pre-processing and data
normalization. In this section of the paper we discuss its key aspects: Identity Access Management (IAM) and ML. Machine
learning for healthcare technology consists of algorithms that use self-learning neural networks to improve treatment
quality by assessing external data such as a patient's condition, X-rays, CT scans, and numerous tests and screenings. IAM
in healthcare should focus on managing all forms of identities, including users, privileged users, patients, devices, and
apps, as well as provisioning access to target systems and resources holding sensitive data, such as Electronic Medical
Records (EMR), using fine-grained access restrictions. IAM is a proven strategy for ensuring that healthcare data remain
confidential. It manages access rights and establishes a password policy to give users secure access to medical documents,
reports and other files in healthcare IT systems. Fig. 1 shows the user flowchart for the proposed smart healthcare
management system, after which we discuss the IAM and disease detection models.
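The fine-grained access restrictions described above can be sketched as a small role-permission policy; the role names, resources, and actions below are illustrative assumptions, not the platform's actual IAM configuration:

```python
# Hedged sketch of role-based, fine-grained access control for EMR resources.
# Roles, resources, and actions are hypothetical examples.
POLICY = {
    "doctor":    {"emr": {"read", "write"}, "scan": {"read", "write"}},
    "caretaker": {"emr": {"read"}, "scan": {"read"}},
    "patient":   {"emr": {"read"}},
}

def is_allowed(role, resource, action):
    """Return True only if the role's policy grants the action on the resource.

    Unknown roles or resources default to denial (deny-by-default)."""
    return action in POLICY.get(role, {}).get(resource, set())

print(is_allowed("doctor", "emr", "write"))   # True
print(is_allowed("patient", "scan", "read"))  # False: not granted
```

A production IAM system would back this check with authenticated identities and audit logging; the point here is only the deny-by-default permission lookup.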
Fig. 2: (a) MRI scan with a Normal Brain [23] (b) MRI scan with a Brain Tumour [23]
b) Mask RCNN for Brain Tumour Detection: The recommended approach is resilient against variations in size, shape, and
overlapping tumour borders with general brain tissues, even when there are MRI abnormalities such as noise, the bias
field effect, and various acquisition angles. In this work we offer an automated approach for increasing the resilience
of brain tumour recognition and segmentation, which makes use of Mask RCNN [24]. Mask RCNN is a technique for image
segmentation that reliably recognizes objects in an image and creates a refined segmentation mask for every instance.
Instance segmentation and semantic segmentation are two types of segmentation; Mask RCNN performs instance segmentation,
which is why it was used in the proposed model, since it aids in feature extraction and segmentation of each image
instance. Mask RCNN is a modification of Faster RCNN that extends the existing branch for bounding-box detection with a
parallel branch for estimating an object mask over each Region of Interest. Mask RCNN beats all existing single-model
entries on every benchmark task and has considerably fewer over-fitting scenarios than Faster RCNN [25]. Skull removal
(to avoid detecting bones) and background removal (by discovering extreme points in shapes) were done as data
pre-processing steps for this model.
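The background-removal step can be illustrated by finding the extreme points of the foreground pixels in a binary mask and cropping to them; this is a simplified, pure-Python stand-in for the actual pre-processing pipeline:

```python
# Sketch of background removal via extreme points of the foreground.
# The mask is a nested list of 0/1 values standing in for a binary brain mask.
def crop_to_foreground(mask):
    """Crop a binary mask to the bounding region of its non-zero pixels."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for c in range(len(mask[0])) if any(row[c] for row in mask)]
    top, bottom = min(rows), max(rows)   # extreme points vertically
    left, right = min(cols), max(cols)   # extreme points horizontally
    return [row[left:right + 1] for row in mask[top:bottom + 1]]

mask = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(crop_to_foreground(mask))  # [[1, 1], [1, 0]]
```

On real MRI scans this crop would be applied to the image using the same extreme-point coordinates, typically after thresholding, with a contour-based method (e.g. OpenCV) doing the heavy lifting.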
c) Feature Extraction: Feature extraction is a methodology in which a large volume of raw data is reduced to a smaller
number of well-managed features for processing. The backbone network is used to extract useful details from the input
MRI scans, and a branch of the Mask RCNN is dedicated to classification and bounding-box regression. Because ResNet 101
provides a convolutional neural network 101 layers deep, it is used to extract information from a picture. When we apply
ResNet 101 instead of the standard CNN model, we gain a 26% relative improvement. When a plain CNN model is made larger
and deeper, a degradation concern appears: accuracy becomes saturated as the depth is increased. Regions of interest are
generated using the Region Proposal Network (RPN). The RPN has the advantage of being able to propose objects on any
dataset, which makes it useful for end-to-end model training. A 3 × 3 convolution operation scans the picture to produce
relevant features representing bounding boxes of varying sizes spread throughout the image. There are around 20 thousand
anchors of various scales and sizes that overlap one another to cover the image. To identify whether an anchor contains
the object or only background, binary classification is employed. The bounding-box regressor produces bounding boxes
based on the value of the Intersection-over-Union (IoU): positive anchors (foreground class) have an IoU larger than 0.7
with a ground-truth (GT) box, while negative anchors have a lower IoU [26].
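The anchor-labelling rule above can be sketched directly from the IoU definition; the helper names below are ours, not part of any Mask RCNN codebase:

```python
# IoU between two axis-aligned boxes given as (x1, y1, x2, y2),
# and the foreground/background anchor labelling rule built on it.
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes; 0.0 when they are disjoint."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union else 0.0

def label_anchor(anchor, gt_box, pos_thresh=0.7):
    """Anchors overlapping a ground-truth box by more than 0.7 are foreground."""
    return "foreground" if iou(anchor, gt_box) > pos_thresh else "background"

gt = (0, 0, 10, 10)
print(iou(gt, gt))                      # 1.0
print(label_anchor((0, 0, 9, 10), gt))  # foreground (IoU = 0.9)
```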
d) Bounding Box Regression and Classification of the Region of Interest (RoI): The input to this network is the
feature-mapped image of the dataset together with the proposed RoIs generated in the previous step. The two classes into
which a specific input image will be classified are tumour and no tumour, and this network further refines the bounding
box. The bounding box helps in locating and measuring the size of the tumour; with the help of bounding-box regression
we can further refine our results, which assists in encapsulating the tumour region. From the convolutions we obtain a
feature map that is downsampled k times from the original picture size, so the granularity of an RoI rarely coincides
with the feature-map grid. The RoI Align layer addresses this: it extracts fixed-size feature vectors for arbitrarily
defined candidate areas using bilinear interpolation, avoiding the misalignment issues that arise when the RoI pooling
layer uses a quantization operation. To acquire the final recognition results, these feature vectors are categorized and
regressed in the classification and regression layers.
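As a sketch of the bounding-box refinement, the standard regression parameterization used by Faster/Mask RCNN computes centre and size offsets between a proposal and its ground-truth box; the example boxes are illustrative:

```python
# Standard (tx, ty, tw, th) bounding-box regression targets:
# centre offsets normalized by the proposal size, plus log-scale size ratios.
import math

def bbox_regression_targets(proposal, gt):
    """Offsets mapping a proposal box onto its ground-truth box.

    Boxes are (x1, y1, x2, y2). A perfect proposal yields all zeros."""
    pw, ph = proposal[2] - proposal[0], proposal[3] - proposal[1]
    gw, gh = gt[2] - gt[0], gt[3] - gt[1]
    px, py = proposal[0] + pw / 2, proposal[1] + ph / 2  # proposal centre
    gx, gy = gt[0] + gw / 2, gt[1] + gh / 2              # ground-truth centre
    return ((gx - px) / pw, (gy - py) / ph,
            math.log(gw / pw), math.log(gh / ph))

# A proposal that already matches its ground truth needs zero adjustment.
print(bbox_regression_targets((0, 0, 10, 10), (0, 0, 10, 10)))  # (0.0, 0.0, 0.0, 0.0)
```

The regression branch is trained to predict these four numbers, which at inference are applied in reverse to shift and scale each proposal onto the tumour.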
e) Segmentation Mask Acquisition: From here, the model's classification and regression branches feed into the mask
branch. The RoI classifier's outputs are used as input to this segmentation network, and the result is a segmentation
mask with a resolution of 28 × 28 pixels. This 28 × 28 mask contains more information than a binary mask because it
holds floating-point numbers. During the training stage, the ground-truth masks are reduced in size to 28 × 28 pixels to
calculate the loss against the predicted mask. During inference, the predicted mask is resized to fit the bounding box
of the RoI, resulting in the final output mask. The objective of segmentation is to find and segment the brain tumour
against a complicated backdrop without requiring operator intervention. Using Mask RCNN, we predict whether MRI scans
contain tumour or non-tumour regions. Below are three examples from the training set of segmented images of the
underlying tumour; the red portion signifies the tumour region of the MRI scan.
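The inference-time mask resizing can be sketched as follows; for brevity this uses nearest-neighbour interpolation and a tiny 2 × 2 stand-in for the 28 × 28 soft mask, whereas Mask RCNN itself uses bilinear interpolation:

```python
# Resize a small floating-point mask to the RoI bounding-box size, then
# threshold it into a binary mask. Nearest-neighbour is a simplification:
# the real Mask RCNN pipeline interpolates bilinearly.
def resize_mask(mask, out_h, out_w, threshold=0.5):
    in_h, in_w = len(mask), len(mask[0])
    out = []
    for r in range(out_h):
        src_r = min(in_h - 1, r * in_h // out_h)   # nearest source row
        row = []
        for c in range(out_w):
            src_c = min(in_w - 1, c * in_w // out_w)  # nearest source column
            row.append(1 if mask[src_r][src_c] >= threshold else 0)
        out.append(row)
    return out

small = [[0.9, 0.2], [0.1, 0.8]]  # stand-in for a 28x28 soft mask
print(resize_mask(small, 4, 4))   # [[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 1, 1], [0, 0, 1, 1]]
```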
2) Pancreatic Tumour: Pancreatic cancer is considered one of the major causes of cancer death across the globe. In the
United States, 61,450 people are anticipated to be diagnosed with pancreatic cancer (32,960 men and 28,490 women). This
type of cancer accounts for around 3.01% of all cancers; it is the eighth most frequent cancer among women and the tenth
most common cancer among men. CT scanning is widely utilized in the investigation and detection of pancreatic cancers.
The drawback of such imaging is that manual assessment by a radiologist is long and tedious; automated classifiers can
improve the analysis in terms of both precision and time [27]. The model that has been developed utilizes machine
learning algorithms, namely a minimum-distance classifier and a CNN. Tumours of the pancreas are particularly difficult
to detect since they are placed deep in the abdomen and hidden behind a number of organs. Pancreatic cancer has the
lowest 5-year survival rate, about 9%. As a result, it is critical to discover cancer tumours at an early stage so that
the patient can receive a correct diagnosis and treatment, and humanity can prevail over this devastating and fatal
disease. A few diagnostic methods, such as imaging studies and blood tests, may be used to identify whether there is a
tumour in the pancreas. Understanding the tumour's stage (severity) is crucial to selecting the optimal treatment, and a
CT scan can help decide whether surgery is the best option. Because the pancreas occupies only a small portion of the
abdomen, early detection is difficult, and using a detection and segmentation model becomes challenging [28]. Using a
convolutional neural network model, we provide a unique technique for training on and identifying tumours from pictures
in this study.
a) Dataset: The dataset is made up of 1500 CT scans gathered from the Medical Segmentation Decathlon website and
pre-processed using image processing techniques such as denoising and augmentation. It consists of CT scan images of
tumour and non-tumour cases that are fed into the algorithm as input. These images were originally in 3-D format (.nii
file extension), but we converted them to 2-D format (.jpg file extension). Image pre-processing is done to improve the
quality of the dataset, such as removing the mean RGB value, along with data augmentation, which is very useful in
medical picture analysis, such as collecting random patches from the original image and horizontally flipping them. The
dataset was then separated into training and testing sets: the training set included 70 percent of the pictures,
totalling 1000, while the testing set included the remaining 30 percent, totalling 500.
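The split described above can be sketched as follows; loading the 3-D .nii volumes and converting slices to .jpg (e.g. with a library such as nibabel) is assumed to have already happened, and the file names are hypothetical, using the paper's reported counts of 1000 training and 500 testing images:

```python
# Reproducible train/test split of 1500 (hypothetical) slice file names.
import random

def split_dataset(items, n_train, seed=42):
    """Shuffle a copy of the items and split off the first n_train for training."""
    shuffled = items[:]                    # copy so the input stays intact
    random.Random(seed).shuffle(shuffled)  # fixed seed for reproducibility
    return shuffled[:n_train], shuffled[n_train:]

scans = [f"scan_{i:04d}.jpg" for i in range(1500)]  # hypothetical file names
train, test = split_dataset(scans, n_train=1000)
print(len(train), len(test))  # 1000 500
```

Fixing the shuffle seed keeps the split stable across runs, so training and evaluation always see disjoint, repeatable subsets.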
Fig. 4: (a) CT scan with a Normal Pancreas[29] (b) CT scan with a Pancreatic tumour [29]
b) Feature Extraction: The main benefit of CNNs is that they offer automated feature extraction. During the training
stage, the CNN method comprises feature extraction and weight computation. A number of parameters are adjusted during
back-propagation, while the convolutional architecture itself reduces the number of connections inside the network. The
CNN feature extractor is made up of several layers of neurons whose weights are determined throughout the training
phase. An epoch is a machine learning term that refers to one pass of the learning algorithm across the whole training
dataset; huge data collections are commonly organised into batches [30]. The number of epochs trained during algorithm
development was 15, and the training and testing process of a deep learning network generally takes longer. During the
training stage, this instructs the system to learn based on the classes defined in the image. Once the system has
learned how to categorize data based on the attributes provided, it may assign the test data to one of the classes.
Because CNNs can
automatically derive features from the dataset and represent images through learned feature maps, they are most often
used in healthcare systems. These features are then put into a classifier network, which performs classification and
regression. In the employed CNN model, we trained a CNN to discriminate pancreatic cancer from healthy pancreases using
contrast-enhanced CT images of patients. Neural networks use a hierarchy of neurons with activation functions and
variables to extract and synthesize information from images and construct a model that represents the intricate
relationship between visuals and diagnosis. CNNs offer the potential to develop machine detection and diagnostic
procedures for pancreatic cancer to help radiologist interpretation. In a convolutional neural network, the first layer
is the convolution layer, which applies filters to the input image or to previous feature maps. The bulk of the
user-specified characteristics are situated here in the network; the most important are the number of kernels and the
size of the kernels. Pooling layers, which are similar to convolutional layers but have a particular purpose, such as
max pooling, which takes the greatest value in a filter area, or average pooling, which takes the average value in a
filter area, are the second type of layer in a deep CNN; they are commonly used to reduce the complexity of a network.
The fully connected layers, the third type of layer in a deep CNN, can be employed to flatten the findings before
classification and are placed at the end of the CNN. So, the first convolution layer in a CNN learns fundamental
characteristics such as edge and corner detection filters. The deeper layers learn filters that identify distinct
sections of the objects, in our instance the kidney, abdomen, liver, and so on. After that, the fully connected layer
provides better representations for object recognition from within the picture [31].
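The layer-size arithmetic behind these convolution and pooling layers can be checked with the standard output-size formula; the kernel sizes below are illustrative, not the exact configuration of our model:

```python
# Spatial output size of a convolution or pooling layer:
# out = (n - kernel + 2 * padding) // stride + 1
def conv_output_size(n, kernel, stride=1, padding=0):
    return (n - kernel + 2 * padding) // stride + 1

# A 3x3 convolution with stride 1 and no padding shrinks 224 -> 222;
# a 2x2 max pool with stride 2 then roughly halves it to 111.
size = conv_output_size(224, kernel=3)
print(size)                                        # 222
print(conv_output_size(size, kernel=2, stride=2))  # 111
print(conv_output_size(224, kernel=3, padding=1))  # 224 ("same" padding)
```

The same formula applies to max and average pooling, which is why stacking pooling layers steadily reduces the network's spatial complexity.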
c) CNN for Pancreatic Tumour Detection: In medical image analysis, image classification has emerged as a major approach
for early detection and prediction. In this proposed model we have attempted to devise a novel way of building a tumour
detection model, combining image pre-processing techniques with a CNN model architecture. The CNN model architecture is
used to train and test on the data in order to distinguish between tumour and non-tumour regions. The dataset is made up
of CT scan images of tumour and non-tumour cases that are fed into the algorithm as input after the system's picture
enhancement and pre-processing procedures. Because of its versatility and precision, CNN has proved to be highly
beneficial in the field of medical image classification. The convolutional neural network consists of pooling layers,
neural layers and a soft-max layer. Another important area to explore while creating a deep learning CNN model is the
activation function. The activation function is a non-linear transformation that we apply to the input before passing it
to the next layer of neurons or converting it to output. In this model, we have used the ReLU activation function. ReLU
stands for rectified linear unit, a piecewise-linear activation function mostly employed in CNNs that passes a positive
input through directly; otherwise the output is zero. Models created with the ReLU activation function are easy to train
and often achieve better accuracy.
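The ReLU behaviour described above can be written directly:

```python
# Rectified linear unit: positive inputs pass through unchanged,
# everything else is clamped to zero.
def relu(x):
    return x if x > 0 else 0

print([relu(v) for v in [-2.0, -0.5, 0, 1.5, 3.0]])  # [0, 0, 0, 1.5, 3.0]
```

Because the positive branch has a constant gradient of 1, ReLU avoids the vanishing-gradient problem of saturating activations, which is what makes ReLU-based models comparatively easy to train.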
d) CNN Model Result: Detecting pancreatic tumours in abdominal CT scans is harder because the pancreas is a small organ. The CNN model was built on Python 3.8, and the libraries used for training and testing were TensorFlow 2.7 and Keras 2.4. The Adam optimizer was used as the optimization algorithm to train the neural network. A neural network created and trained to learn the classes defined within an image can be used for classification. The two classes in this dataset are tumour and no tumour. Once the system has learned the classification based on the features given to it, it can classify the test data into one of those classes. To classify the tumour region, the various organs must first be identified. An abdominal CT scan contains the following parts: pancreas, liver, kidney, vertebra, stomach, spleen, fat, fluid, and the lining. While the model is being trained, the mean value of each of these areas is calculated, and the classifier is given these values as thresholds for distinguishing the different organs. The sequential CNN model is adopted because it takes less time to train and achieves higher accuracy. Trainable parameters are the components of the network, the weights adjusted by backpropagation. An epoch is one full training cycle over all of the training data: every sample is used exactly once per epoch, and each sample contributes one forward and one backward pass. An epoch is made up of one or more batches, each of which trains the neural network on a subset of the dataset. The total number of trainable parameters is 8,485,218; the model was trained for 20 epochs with a batch size of 35. The extracted feature maps, of dimension 224 × 224, are passed to a max-pooling layer. The training accuracy of the model in detecting the tumour is 92%. Some of the model's outputs are shown below in Fig. 5.
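The threshold-style organ classification described above can be sketched in plain Python. The intensity values here are purely illustrative placeholders, not the means computed from the paper's dataset:

```python
# Hypothetical mean intensities per abdominal region (illustrative only).
REGION_MEANS = {"pancreas": 40.0, "liver": 55.0, "kidney": 30.0, "fat": -90.0}

def classify_region(mean_intensity):
    """Label a region by the stored mean it lies closest to, mirroring
    the threshold-based organ classifier described in the text."""
    return min(REGION_MEANS, key=lambda r: abs(REGION_MEANS[r] - mean_intensity))
```

Under these placeholder means, a region with mean intensity 42.0 would be labelled "pancreas".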
©IJRASET: All Rights are Reserved | SJ Impact Factor 7.538 | ISRA Journal Impact Factor 7.894 | 2799
International Journal for Research in Applied Science & Engineering Technology (IJRASET)
ISSN: 2321-9653; IC Value: 45.98; SJ Impact Factor: 7.538
Volume 10 Issue VI June 2022- Available at www.ijraset.com
3) Covid-19 Detection: SARS-CoV-2, the virus that causes Covid-19, emerged in Hubei, China, and soon spread throughout the world, causing a global pandemic. The reaction has been a confused mix of primarily turmoil with a little good faith thrown in for good measure. Individuals all across the world isolated themselves to limit the spread of the sickness, while researchers hastily shared the pathogen's whole genome, lowering the barriers to collaboration. Despite this, the pandemic has had several detrimental repercussions. Emergency departments have been overwhelmed by the rapid spread of infection and the shortage of resources, causing significant strain among medical personnel. As of November 2020, the total number of reported cases of the illness had surpassed 39,500,000 across more than 180 nations, although the number of individuals affected is most likely far higher, and Covid-19 had claimed the lives of almost 1,110,000 people. This pandemic continues to put clinical systems around the world to the test in a variety of ways, including rapid increases in demand for hospital beds, critical shortages of clinical equipment, and the infection of many healthcare workers. As a result, the capacity for rapid clinical decisions and effective use of medical resources is crucial, which is why a disease prediction model has been developed for this disease [32].
a) Dataset: For the Covid-19 detection model, we employ CT scans to create the disease predictor. The image dataset contains two folders: normal CT scans and Covid-19-positive CT scans. A total of 2,475 CT scans were included in the SARS-CoV-2 CT scan dataset: 1,250 scans positive for Covid-19 and 1,225 scans from individuals who were not diagnosed with Covid-19. The dataset was originally created to promote R&D on artificial-intelligence systems that can determine whether a person is infected with Covid-19 by analysing his or her CT scans. The dataset was obtained from the Kaggle repository.
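A minimal sketch of how the two class folders might be encoded as binary labels, using the counts given above (folder paths and image loading are omitted):

```python
# 1 = Covid-19 positive, 0 = normal; counts as reported for the dataset.
labels = [1] * 1250 + [0] * 1225

total = len(labels)                    # 2475 scans in all
positive_share = sum(labels) / total   # roughly 50.5% positive
```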
Fig. 6: (a) CT scan with a normal diagnosis [33] (b) CT scan with Covid-19 [33]
b) ResNet-50 for Covid-19 Detection: In this disease detection model, Covid-19 is detected using ResNet-50, a convolutional neural network 50 layers deep. ResNet is regarded as a superior deep learning architecture since it is relatively simple to optimise and achieves higher accuracy. Furthermore, deep networks suffer from the vanishing-gradient problem, which ResNet overcomes through its skip connections. The time complexity of the network grows with the number of layers in a deep architecture; the use of a bottleneck design can help reduce this complexity [34]. As a result, we chose the pretrained ResNet-50 model to construct our framework and excluded alternative pretrained networks with more layers. ResNet-50 captures the most important features of an image and can be applied to similar, smaller datasets. This reusability of a pre-trained model not only saves time but also saves resources when the training dataset is limited. All images in the collection have been rescaled to 224 × 224 × 3 pixels so that they can be consumed by the successive stages of the ResNet-50 model. The images are standardised using the mean and standard deviation of the ImageNet collection.
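The skip connections that let ResNet sidestep vanishing gradients simply add the block's input back onto its output. A toy, framework-free sketch of that identity path (the `double_then_relu` transform is a stand-in for a real convolutional stage, not part of ResNet itself):

```python
def relu(x):
    return x if x > 0 else 0.0

def residual_block(x, transform):
    """Skip connection: output = transform(x) + x, so the identity path
    carries the signal (and gradients) even if transform saturates."""
    return [t + xi for t, xi in zip(transform(x), x)]

# Stand-in for a conv + ReLU stage.
double_then_relu = lambda v: [relu(2.0 * xi) for xi in v]

out = residual_block([1.0, -1.0], double_then_relu)  # -> [3.0, -1.0]
```

Note that the second component passes through unchanged: the transform output is zero there, but the skip path preserves the input.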
c) Feature Extraction and Model Result: The network is initially fine-tuned by resizing the images. In addition, the ImageNet dataset is always growing, resulting in a larger training base. Instead of manually selecting per-layer learning rates, the Cyclical Learning Rate approach is used to optimise the learning rate. Images from the input dataset are scaled to 224 × 224 × 3 pixels. The entire network is fine-tuned using a discriminative learning rate for 50 epochs. It is usually useful to train the model iteratively using the progressive resizing technique. The Adam optimizer is used during training with a batch size of 32. The FastAI framework handles data pre-processing, data augmentation, and, most crucially, training [35]. The final accuracy achieved using this method was 88%.
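The Cyclical Learning Rate policy cited above sweeps the rate between a lower and an upper bound instead of fixing it by hand. A minimal sketch of the triangular variant (the bounds and step size below are illustrative, not the values used in training):

```python
def triangular_clr(step, base_lr, max_lr, step_size):
    """Triangular cyclical learning rate: rises linearly from base_lr to
    max_lr over step_size steps, then falls back, and repeats."""
    cycle_pos = step % (2 * step_size)
    # Fraction of the way toward max_lr, in [0, 1].
    x = 1.0 - abs(cycle_pos / step_size - 1.0)
    return base_lr + (max_lr - base_lr) * x
```

At step 0 the rate equals `base_lr`, peaks at `max_lr` after `step_size` steps, and returns to `base_lr` at the end of each cycle.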
We will now see the homepage, the login page, and a brief walkthrough of the different disease prediction models. The major reason we have focused our work on these three diseases is their survival rates, which are known to be very low; work of this kind can make a real impact on society by saving lives. Early detection of these three diseases alone could save close to 600,000 lives every year.
2) Fig. 9 shows a screenshot of the drop-down menu for the Doctor portal and the working of the AI-powered chatbot deployed on the web application for the proposed healthcare management system discussed in Fig. 1. This AI-powered chatbot acts as a digital assistant for the users: it is built to help the user navigate the website, which can be extremely useful for a first-time user. This portal will be used by registered doctors to save their documents, keep track of their patients' progress, and so on. The user will have access to conventional security measures, such as changing passwords and two-factor authentication, to protect their account. The doctor portal features a function that allows the doctor to send tailored messages to the patient, checking on their recuperation and receiving timely information, with the goal of bringing the patient and doctor together on a single platform. This allows the doctor and the patient to communicate more effectively and understand each other. Surgery planning is another important aspect of this undertaking: the doctor can reserve an operating theatre or surgery room through the Doctor portal for specific dates and times. In addition, the doctor can request a support team for the surgery in advance, consisting of a surgeon's assistant, an anaesthetist, a circulating nurse, and a surgical technologist.
Fig. 9: Screen-shot of the drop down menu for the Doctor Portal
3) Fig. 10 shows a screenshot of the drop-down menu for the Emergency SOS services discussed in Fig. 1. The Emergency SOS services provide features such as an emergency appointment and an emergency doctor, who is available 24 hours a day, seven days a week on a Zoom call to assist the patient in emergency situations. Another aspect of the Emergency SOS is that users can skip queues and manual form filling: they can simply go to the website and check the availability of rooms, such as ICU or emergency-room beds, well in advance, as well as choose a particular doctor to consult. Besides the Emergency SOS, there is also a menu for the feedback feature of the smart healthcare management system discussed in Fig. 1. By filling in a short survey, the user can leave feedback on the website, covering both the user experience and the user interface. This helps gauge their level of contentment with the product and can prove useful for improving it.
Fig. 10: Screen-shot of the drop down menu for the Emergency SOS services
4) Fig. 11 shows a screenshot of the drop-down menu for the Patient portal discussed in Fig. 1. The Patient portal is designed to let the patient make full use of the features provided on the platform, such as booking an appointment, the Emergency SOS services, and access to health records and reports. Every patient is given a personalised dashboard through which they can access all of their personal health information. Personal details, the concerned doctor's details, medical test reports, X-rays, MRI scans, illness descriptions, a thorough diagnosis, precautions, updated prescriptions, the road to recovery, and any other directions given by the doctor will all be available on the patient's dashboard.
Fig. 11: Screen-shot of the drop down menu for the Patient Portal
5) Fig. 12 shows a screenshot of the homepage for the proposed Smart Healthcare Management System along with the drop-down menu for the disease detection models. The models have been built for three diseases: brain tumour, pancreatic tumour, and Covid-19. Once users click on the disease detection model they want to use, they are taken to the page where that model has been deployed.
6) Fig. 13 shows the header page for Brain Tumour Disease Detection. Once logged in, the user can navigate from the top-right menu to the disease detection model of their choice from the three.
Fig. 13: Screen-shot of the Brain Tumour Disease Detection Header Page
7) Fig. 14 shows the page for Brain Tumour Detection; the user needs to scroll down to the screenshot shown below. Here there is a brief description of brain tumours, and the user can use the "Choose File" button to navigate to a scanned image on their local machine and select it in JPG or PNG format. Once the file has been selected, the user presses the Submit button, which triggers the trained model and automatically displays the results on the next page.
Fig. 14: Screen-shot of the file selection page for Brain Tumour Disease Detection
8) Fig. 15 shows the result page for Brain Tumour Disease Prediction. This page displays the predicted result, in this case whether the uploaded scan has been diagnosed with a brain tumour or not. The model deployed on the server for brain tumour detection was developed using the Mask RCNN methodology and has an accuracy of 91%, as discussed in section 4.2. Additionally, when the report is positive, a list of doctors is shown whom the user can consult immediately for specialist opinions; for the sake of the prototype, a random list from our self-assembled database is displayed.
Fig. 15: Screen-shot of the Predicted Output for Brain Tumour Disease Detection
9) Fig. 16 shows the header page for Pancreatic Tumour Disease Detection. Once logged in, the user can navigate from the top-right menu to the disease detection model of their choice from the three. The next few screenshots show the working of Pancreatic Tumour Disease Detection using deep learning; it follows similar steps to Brain Tumour detection.
Fig. 16: Screen-shot of the Pancreatic Tumour Disease Detection Header Page
10) Fig. 17 shows the body of the page for Pancreatic Tumour Disease Detection. From Fig. 16 above, the user needs to scroll down to the screenshot shown below. Here there is a brief description of pancreatic tumours, and the user can use the "Choose File" button to select a scan image of the disease in JPG or PNG format from their local machine. Once the file has been selected, the user presses the Submit button, which triggers the trained model and automatically displays the results on the next page.
Fig. 17: Screen-shot of Selected File for Prediction in Pancreatic Tumour Detection
11) Fig. 18 shows the result page for Pancreatic Tumour Disease Detection. This page displays the predicted result, in this case whether the uploaded scan has been diagnosed with a pancreatic tumour or not. The model deployed on the server for pancreatic tumour detection was developed using the CNN methodology and has an accuracy of 92%, as discussed in section 4.2. Additionally, when the report is positive, a list of doctors is shown whom the user can consult immediately for specialist opinions; for the sake of the prototype, a random list from our self-assembled database is displayed.
Fig. 18: Screen-shot of the Predicted Output for Pancreatic Tumour Disease Detection
12) Fig. 19 shows the header page for Covid-19 Disease Detection. Once logged in, the user can navigate from the top-right menu to the disease detection model of their choice from the three. The next few screenshots show the working of Covid-19 Disease Detection using deep learning; it follows similar steps to Brain Tumour detection.
13) Fig. 20 shows the body of the page for Covid-19 Disease Detection. From Fig. 19 above, the user needs to scroll down to the screenshot shown below. Here there is a brief description of Covid-19, and the user can use the "Choose File" button to select a scan image of the disease in JPG or PNG format from their local machine.
Fig. 20: Screen-shot of Upload the Scan Page for Covid-19 Disease Detection
14) Fig. 21 shows the page for Covid-19 Disease Detection. Once the file has been selected from the local machine, the user presses the Submit button, which triggers the trained model; the results are then displayed on the next page.
Fig. 21: Screen-shot of the Selected File for Prediction in Covid-19 Disease Detection
15) Fig. 22 shows the result page for Covid-19 Disease Detection. This page displays the predicted result, in this case whether the uploaded scan has been diagnosed with Covid-19 or not. The model deployed on the server for Covid-19 detection was developed using the ResNet methodology and has an accuracy of 88%, as discussed in section 4.2. Additionally, when the report is positive, a list of doctors is shown whom the user can consult immediately for specialist opinions; for the sake of the prototype, a random list from our self-assembled database is displayed.
16) Parameters for performance evaluation: We have used parameters that are widely employed in machine learning (ML) and medical image processing. For classification, we employed three distinct methodologies, Mask RCNN, CNN, and ResNet-50, and compared the results for each classifier. We used accuracy, F1 score, and precision to assess the performance of the disease detection models.
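All three metrics can be read off the binary confusion-matrix counts; a small sketch of the standard formulas:

```python
def metrics(tp, fp, fn, tn):
    """Accuracy, precision, and F1 score from binary confusion-matrix
    counts (assumes tp+fp and tp+fn are non-zero)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, f1

# Illustrative counts, not taken from the paper's experiments:
acc, prec, f1 = metrics(90, 10, 5, 95)
```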
17) Confusion Matrix of the Disease Detection Models: The confusion matrix has further been used for evaluating the performance of each model. An N × N matrix is used to evaluate the performance of a classification model, where N is the number of target classes, which in our case is 2. The matrix compares the actual target values with the machine learning model's predictions.
a) Fig. 23 shows the confusion matrix of the Brain Tumour Detection Model.
b) Fig. 24 shows the confusion matrix of the Pancreatic Tumour Detection Model.
Fig. 24: Confusion Matrix for the Pancreatic Tumour Detection Model
c) Fig. 25 shows the confusion matrix of the Covid-19 Disease Detection Model.
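The 2 × 2 matrix described above can be assembled directly from the label lists; a minimal sketch:

```python
def confusion_matrix(y_true, y_pred):
    """2x2 confusion matrix for binary labels: rows index the actual
    class and columns the predicted class (0 = negative, 1 = positive)."""
    m = [[0, 0], [0, 0]]
    for actual, predicted in zip(y_true, y_pred):
        m[actual][predicted] += 1
    return m

cm = confusion_matrix([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])  # -> [[1, 1], [1, 2]]
```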
VI. CONCLUSION
By providing a comprehensive and customised approach to delivering treatment, regardless of location, digital health can revolutionise the patient experience and address access barriers. The foundation of this goal is the creation of a unified, data-driven, pan-India health system. In this paper, we propose a way of digitalizing healthcare by unifying the healthcare industry and bringing it onto a single platform: bringing clinics, hospitals, diagnostic services, pharmacy services, and hospital admission services together in one place. The features developed on our platform support the automation of some of the critical processes that are currently done manually, including decision-making, post-surgery planning, tracking, estimating recovery time, and smart disease detection using deep learning. We employed a Hybrid Convolution Neural Network (HCNN) to construct a disease prediction system that can handle multiple diseases on a single server. Physicians, radiologists, neurosurgeons, and other medical personnel will benefit from the suggested system. The accuracy of this model is designed to be higher than that of a plain neural network. This system can also be employed to reduce diagnostic costs and improve the accuracy of diagnosis. In a country such as India, where the doctor-to-patient ratio stands at about 1.15 doctors per 1,000 citizens, such models can be put to positive use: they reduce the dependency on the doctor and allow doctors and healthcare workers to pay closer attention to more complicated cases. They also shorten queue times and give doctors a reliable second opinion, which in turn helps with the time management of both doctors and patients. Such a system is a breakthrough at a time when we are highly dependent on technology and artificial intelligence.
There are some features which we planned to add but, due to hard time constraints, were unable to integrate before the deadline. We plan to add a voice-agent chatbot that will help differently-abled users use the website and mobile app more easily. The disease detection models need further training so that they can diagnose a wider range of diseases; we plan to consult specialised doctors who can help us make the models predict more diseases with higher accuracy. For the prototype phase we have focused on making the system hospital-specific, but we plan to extend it to offer a wider range of medical services across many hospitals. Digitisation is the most comfortable shift that Indian physicians and patients are experiencing, and the growth of the internet, worldwide market penetration, and increased mobile-phone usage are all expected to amplify this trend. Education and awareness about the use of digital health can increase the number of individuals who benefit from the technology. Using digital health solutions offered on secure online web applications, patients can better understand and engage in discussions about their health data, which can improve outcomes. The data from these technologies can help providers build a more complete picture of a person's daily health. Such platforms can make a real difference and improve the healthcare scenario in India, especially in rural India, where there is a lack of doctors and infrastructure. This is where the opportunity for such healthcare management platforms shines and must be explored.
REFERENCES
[1] Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen A.W.M. van der Laak,
Bram van Ginneken, Clara I. Sánchez, “A survey on deep learning in medical image analysis.” Medical Image Analysis, Volume 42, 2017, Pages 60-88, ISSN
1361-8415, DOI: 10.1016/j.media.2017.07.005.
[2] Heba Mohsen, El-Sayed A. El-Dahshan, El-Sayed M. El-Horbaty, Abdel-Badeeh M. Salem, “Classification using deep learning neural networks for brain
tumors.”, Future Computing and Informatics Journal, Volume 3, Issue 1, 2018, Pages: 68-71, ISSN 2314-7288, DOI: 10.1016/j.fcij.2017.12.001.
[3] S. Das, R. Aranya, N. N. Labiba, “Brain Tumor Classification Using Convolutional Neural Network.” 2019 1st International Conference on Advances in
Science, Engineering and Robotics Technology (ICASERT), 2019, Pages: 1-5, DOI: 10.1109/ICASERT.2019.8934603.
[4] N. Kumaravel, K. S. Sridhar and N. Nithiyanandam, “Automatic diagnosis of heart diseases using neural network.” Proceedings of the 1996 Fifteenth Southern
Biomedical Engineering Conference, Volume 1996, Pages: 319-322, DOI: 10.1109/SBEC.1996.493214.
[5] Alex Fornito, Andrew Zalesky, Edward T. Bullmore, Chapter 1 - “An Introduction to Brain Networks, Fundamentals of Brain Network Analysis.”, Academic
Press, Volume 2016, Pages: 1-35, ISBN 9780124079083, DOI: 10.1016/B978-0-12-407908-3.00001-7.
[6] Muhammad Sajjad, Salman Khan, Khan Muhammad, Wanqing Wu, Amin Ullah, Sung Wook Baik, “Multi-grade brain tumor classification using deep CNN
with extensive data augmentation.” Journal of Computational Science, Volume 30, 2019, Pages: 174-182, ISSN 1877-7503, DOI: 10.1016/j.jocs.2018.12.003.
[7] F. Özyurt, H. Kutlu, E. Avci and D. Avci, “A New Method for Classification of Images Using Convolutional Neural Network Based on Dwt-Svd Perceptual
Hash Function.” 2018 3rd International Conference on Computer Science and Engineering (UBMK), Volume 2018, Pages: 410-413, DOI:
10.1109/UBMK.2018.8566537.
[8] Xiaojun Lu, Xu Duan, Xiuping Mao, Yuanyuan Li, Xiangde Zhang, “Feature Extraction and Fusion Using Deep Convolutional Neural Networks for Face
Detection.” Mathematical Problems in Engineering, Volume 2017, Article ID 1376726, Pages: 9-10, 2017, DOI: 10.1155/2017/1376726
[9] Huan, Er-Yang & Wen, Gui-Hua & Zhang, Shi-Jun & Li, Dan-Yang & Hu, Yang & Chang, Tian-Yuan & Wang, Qing & Huang, Bing-Lin. (2017), “Deep
Convolutional Neural Networks for Classifying Body Constitution Based on Face Image.” Computational and Mathematical Methods in Medicine, 2017,
Volume 6, Pages: 1-9, DOI: 10.1155/2017/9846707.
[10] Bhandari & Abhishta, “Convolutional neural networks for brain tumour segmentation.” Insights into imaging, Volume 11, Page 77, DOI:10.1186/s13244-020-
00869-4
[11] Liang, H., Sun, X., Sun, Y., & Gao, Y. (2017). “Text feature extraction based on deep learning: a review.” EURASIP journal on wireless communications and
networking, 2017, Volume 1, Pages: 2-11, DOI: 10.1186/s13638-017-0993-1
[12] Ayachi R., Ben Amor N. (2009). “ Brain Tumour Segmentation Using Support Vector Machines.” Symbolic and Quantitative Approaches to Reasoning with
Uncertainty. ECSQARU 2009. Lecture Notes in Computer Science, Volume 5, Pages: 55-90, DOI: 10.1007/978-3-642-02906-6_63.
[13] Y. Xu, Z. Jia, Y. Ai, F. Zhang, M. Lai and E. I. Chang, “Deep convolutional activation features for large scale Brain Tumour histopathology image
classification and segmentation.”, 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Volume 2015, Pages: 947-951,
DOI: 10.1109/ICASSP.2015.7178109.
[14] L. Deng, “Artificial Intelligence in the Rising Wave of Deep Learning: The Historical Path and Future Outlook [Perspectives].” IEEE Signal Processing
Magazine, Volume 35, Pages: 180-177, DOI: 10.1109/MSP.2017.2762725.
[15] Hassan Ali Khan, Wu Jue, Muhammad Mushtaq, Muhammad Umer Mushtaq. “Brain tumour classification in MRI image using convolutional neural
network[J].” Mathematical Biosciences and Engineering, 2020, Volume 17(5), Pages: 6203-6216, DOI: 10.3934/mbe.2020328
[16] Pereira, A. Pinto, V. Alves and C. A. Silva, “Brain Tumor Segmentation Using Convolutional Neural Networks in MRI Images.” IEEE Transactions on
Medical Imaging, Volume 35, Issue 5, Pages: 1240-1251, May 2016, DOI: 10.1109/TMI.2016.2538465.
[17] Jeenal Shah, Sunil Surve, Varsha Turkar, “Pancreatic Tumour Detection Using Image Processing.” Procedia Computer Science, Volume 49, 2015, Pages: 11-
16, ISSN 1877-0509, DOI: 10.1016/j.procs.2015.04.221
[18] Sudhen B. Desai, Anuj Pareek, Matthew P. Lungren, “Deep learning and its role in COVID-19 medical imaging.” Intelligence-Based Medicine, Volume 3,
2020, Pages: 100-113, ISSN 2666-5212, DOI: 10.1016/j.ibmed.2020.100013.
[19] Y. Pan, “Brain tumour grading based on Neural Networks and Convolutional Neural Networks.”, 37th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBC), 2015, Pages: 699-702, DOI: 10.1109/EMBC.2015.7318458.
[20] J. Ker, L. Wang, J. Rao and T. Lim, “Deep Learning Applications in Medical Image Analysis.” IEEE Access, Volume 6, Pages: 9375-9389, 2018, DOI:
10.1109/ACCESS.2017.2788044.
[21] Mohammad Rahimzadeh, Abolfazl Attar, “A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images
based on the concatenation of Xception and ResNet50.” Informatics in Medicine Unlocked, Volume 19, 2020, 100360, ISSN 2352-9148, DOI:
10.1016/j.imu.2020.100360.
[22] Hossain, T., Shishir, F. S., Ashraf, M., Al Nasim, M. A., & Shah, “Brain Tumour Detection Using Convolutional Neural Network.” 2019 1st International
Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), Pages: 1-6, DOI: 10.1109/ICASERT.2019.8934561.
[23] B. H. Menze, A. Jakab, S. Bauer, J. Kalpathy-Cramer, K. Farahani, J. Kirby, “The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).”, IEEE
Transactions on Medical Imaging, Volume 34(10), 1993-2024 (2015), DOI: 10.1109/TMI.2014.2377694
[24] S. Albawi, T. A. Mohammed and S. Al-Zawi, “Understanding of a convolutional neural network.”, 2017 International Conference on Engineering and
Technology (ICET), Volume 2017, Pages: 1-6, DOI: 10.1109/ICEngTechnol.2017.8308186.
[25] Savita Ahlawat, Amit Choudhary, “Hybrid CNN-SVM Classifier for Handwritten Digit Recognition.” Procedia Computer Science, Volume 167, 2020, Pages:
2554-2560, ISSN 1877-0509, DOI: 10.1016/j.procs.2020.03.309.
[26] Uysal İlhan & Güvenir, Halil Altay, “An overview of regression techniques for knowledge discovery.” Knowledge Engineering Review, Volume 14, 1999,
Pages: 319-340, DOI: 10.1017/S026988899900404X.
[27] Farag, L. Lu, H. R. Roth, J. Liu, E. Turkbey and R. M. Summers, “A Bottom-Up Approach for Pancreas Segmentation Using Cascaded Superpixels and (Deep)
Image Patch Labeling.” IEEE Transactions on Image Processing, Volume 26, Issues 1, Pages: 386-399, Jan. 2017, DOI: 10.1109/TIP.2016.2624198.
[28] Kriegsmann M, Kriegsmann K, Steinbuss G, Zgorzelski C, Kraft A, Gaida MM. “Deep Learning in Pancreatic Tissue: Identification of Anatomical Structures,
Pancreatic Intraepithelial Neoplasia, and Ductal Adenocarcinoma.” International journal of molecular sciences, Volume 22(10), Pages: 53-85. DOI:
10.3390/ijms22105385.
[29] Naito, Yoshiki et al. “A deep learning model to detect pancreatic ductal adenocarcinoma on endoscopic ultrasound-guided fine-needle biopsy.” Scientific
reports, Volume 11, Pages: 54-85, 2021, DOI:10.1038/s41598-021-87748-0.
[30] Zhao, ZhiYu, and Wei Liu, “Pancreatic Cancer: A Review of Risk Factors, Diagnosis, and Treatment.” Technology in cancer research & treatment, Volume 19,
2020, DOI:10.1177/1533033820962117.
[31] Jiang, Huiyan & Zhao, Di & Zheng, Ruiping & Ma, Xiaoqi, “Construction of Pancreatic Cancer Classifier Based on SVM Optimized by Improved FOA.”
BioMed Research International, Volume 2015, Pages: 1-12, DOI: 10.1155/2015/781023.
[32] Shorten, C., Khoshgoftaar, T. M., & Furht, B, “Deep Learning applications for COVID-19.” Journal of big data, Volume 8(1),2021, DOI: 10.1186/s40537-020-
00392-9
[33] M. Jamshidi, “Artificial Intelligence and COVID-19: Deep Learning Approaches for Diagnosis and Treatment.” IEEE Access, Volume 8, Pages: 109581-
109595, 2020, DOI: 10.1109/ACCESS.2020.3001973.
[34] Y. Oh, S. Park and J. C. Ye, “Deep Learning COVID-19 Features on CXR Using Limited Training Data Sets.” IEEE Transactions on Medical Imaging,
Volume 39, Pages: 2688-2700, 2020, DOI: 10.1109/TMI.2020.2993291.
[35] Aras M. Ismael, Abdulkadir Şengür, “Deep learning approaches for COVID-19 detection based on chest X-ray images.” Expert Systems with Applications, Volume 164, 2021, Pages: 101-114, ISSN 0957-4174, DOI: 10.1016/j.eswa.2020.114054.
[36] Proposed Healthcare Management System.
[37] Available at https://anrmedicus.com/
[38] Masood, M., Nazir, T., Nawaz, “Brain tumour localization and segmentation using mask RCNN.” Frontiers of Computer Science, Volume 15, ISSN 156338,
2021, DOI: 10.1007/s11704-020-0105.
[39] Chu, Linda & Fishman, Elliot., “Deep learning for pancreatic cancer detection: current challenges and future strategies.” The Lancet Digital Health, Volume 2,
Pages: 271-272, DOI:10.1016/S2589-7500(20)30105-9.
[40] Elpeltagy, Marwa, and Hany Sallam, “Automatic prediction of COVID- 19 from chest images using modified ResNet50.” Multimedia tools and applications,
2021, Pages: 1-13, doi:10.1007/s11042-021-10783-6.