
Cybersecurity in Predictive Diagnostics:

Preventing Attacks on ML-based Healthcare Systems


1st Author’s Full name and ID, 2nd Author’s Full name and ID,

3rd Author’s Full name and ID, 4th Author’s Full name and ID and/or 5th Author’s Full name and ID

I. Introduction

Machine learning is booming in every industry, particularly in healthcare, where it delivers more
efficient results and better recommendations for treatment. However, these ML models rely on sensitive
patient data and complex algorithms, which exposes them to various security threats. Unauthorized
access and data breaches can result in false diagnoses, serious health hazards, and a decline in patient
trust, which can harm a hospital's reputation and carry legal repercussions (Goodfellow et al., 2015).

Machine learning models are increasingly used in the healthcare industry to diagnose diseases by
evaluating enormous datasets of medical images, including X-rays, CT scans, and MRIs. These
models help detect patterns and support medical decisions. However, their dependence on
sensitive data and their internal complexity make them vulnerable to attacks such as adversarial
manipulation, data poisoning, and model extraction (Biggio & Roli, 2018).

This report focuses on the cyber-attacks faced by ML-based healthcare systems. It examines
vulnerabilities within such a system using an attack graph, explores attack methods such as adversarial
and model-extraction attacks, and discusses defenses like adversarial training. By analyzing both attack
and defense, the report highlights the importance of secure ML deployment in ensuring trust and
reliability in patient care.

II. Use Case Scenario and Background

Use Case: Machine Learning-Powered Cancer Diagnosis

Machine learning is important in healthcare, especially in cancer diagnosis and treatment, because it is
more accurate and reduces the threat of human error. Cancer is a sensitive health issue that demands
great care during diagnosis, since a misdiagnosis may cost patients their lives.

Because these ML systems are data-driven, they use sensitive patient data, and because they are complex,
they are exposed to security threats such as data breaches, unauthorized access, and manipulation of the
system. Such attacks may lead to wrong suggestions from the system, causing severe harm to patient
health and to the reputation of the hospital through loss of patient trust, and legal issues may arise in
the event of a patient's death.

Data flow diagram


Level 0 Data Flow Diagram

Level 1 Data Flow Diagram

III. Analysis and Discussion

Attack graphs
Types of Attacks

1. Adversarial Attacks

Description: These attacks deceive a machine learning model into making wrong predictions by
feeding it carefully crafted inputs whose perturbations are too subtle for the system to detect,
leading it to an incorrect prediction.

Attackers select an ML-based system, such as a CNN for cancer diagnosis, and prepare inputs
with slight alterations that mislead the model into producing a wrong prediction.

Impact: These attacks can significantly reduce the model's accuracy and reliability, endangering
patient health and safety.
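The slight-alteration idea above can be sketched with the Fast Gradient Sign Method (FGSM). The toy logistic-regression "diagnosis" model, its weights, the input features, and the epsilon value below are all illustrative assumptions, not a real clinical system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(w, b, x):
    """Probability that the scan is malignant, per the toy model."""
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(w, b, x, y_true, epsilon=0.5):
    """FGSM: nudge every input feature in the direction that increases
    the loss, with each change bounded by epsilon."""
    p = predict(w, b, x)
    # Gradient of the binary cross-entropy loss w.r.t. the input x is (p - y) * w.
    grad_x = (p - y_true) * w
    return x + epsilon * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=16)   # toy model weights (assumption)
b = 0.0
x = rng.normal(size=16)   # toy "scan" feature vector (assumption)
y = 1.0                   # ground truth: malignant

x_adv = fgsm_perturb(w, b, x, y)
print(predict(w, b, x), predict(w, b, x_adv))  # adversarial score moves toward 0
```

Even though each feature changes by at most epsilon, the per-feature nudges add up across the whole input, which is why an imperceptible perturbation can flip a malignant prediction to benign.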

2. Data Poisoning

Description: Attackers inject malicious data into the training dataset during the system's training
phase. The poisoned data corrupts the learning process, resulting in a faulty model that learns
patterns which are not actually correct.

Attackers gain access to the training dataset before the model is trained and modify its data to
cause biased predictions. For example, a patient who has cancer could be classified as cancer-free,
a severe error that endangers lives.

Impact: The model performs significantly worse and exhibits biased behavior, which harms the
system in the long run.
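A minimal sketch of the poisoning step described above is label flipping, where an attacker with access to the training set inverts a fraction of the ground-truth labels (malignant ↔ benign). The labels and flip fraction are illustrative assumptions:

```python
import numpy as np

def poison_labels(y, flip_fraction=0.3, rng=None):
    """Flip a fraction of binary labels to poison the training set."""
    rng = rng or np.random.default_rng(0)
    y_poisoned = y.copy()
    n_flip = int(len(y) * flip_fraction)
    idx = rng.choice(len(y), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # malignant <-> benign
    return y_poisoned

# toy labels: 1 = malignant, 0 = benign (assumption)
y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 1])
y_bad = poison_labels(y)
print((y != y_bad).sum(), "labels flipped")  # -> 3 labels flipped
```

A model trained on `y_bad` instead of `y` would learn from systematically wrong examples, which is exactly the biased-prediction failure the attack aims for.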

Defenses Against ML and AI-Powered System Attacks

Types of Defenses

1. Adversarial Training

Description: Adversarial training is a method used to increase the robustness of ML systems
against adversarial attacks by training them not only on normal data but also on adversarial
data, so that the model learns to distinguish useful inputs from faulty ones.

The process involves three steps: generating adversarial examples, training the model on a
mixture of clean and adversarial datasets, and testing the system's robustness against adversarial
inputs, improving it further if required.

This makes the system more reliable and robust for real-world deployment, as it is far less
easily affected by malicious inputs from attackers.
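The three steps above can be sketched as a training loop that, each epoch, crafts FGSM examples against the current model and trains on the clean and adversarial sets together. The toy logistic-regression model, the synthetic data, and the hyperparameters are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(w, b, X, y, eps):
    """Craft adversarial copies of X against the current model."""
    p = sigmoid(X @ w + b)
    grad_X = (p - y)[:, None] * w  # dLoss/dx for binary cross-entropy
    return X + eps * np.sign(grad_X)

def adversarial_train(X, y, epochs=50, lr=0.1, eps=0.1):
    rng = np.random.default_rng(0)
    w, b = rng.normal(size=X.shape[1]) * 0.01, 0.0
    for _ in range(epochs):
        X_adv = fgsm(w, b, X, y, eps)      # step 1: create adversarial data
        X_mix = np.vstack([X, X_adv])      # step 2: train on clean + adversarial
        y_mix = np.concatenate([y, y])
        p = sigmoid(X_mix @ w + b)
        w -= lr * X_mix.T @ (p - y_mix) / len(y_mix)
        b -= lr * np.mean(p - y_mix)
    return w, b                            # step 3: evaluate robustness externally

# toy separable "benign vs malignant" features (assumption)
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-1, 1, (50, 4)), rng.normal(1, 1, (50, 4))])
y = np.concatenate([np.zeros(50), np.ones(50)])
w, b = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy: {acc:.2f}")
```

In practice the same idea is applied to deep networks, with the adversarial examples regenerated against the model's current weights at every step so the defense keeps pace with the attack.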

2. Data Sanitization

Description: Data sanitization is the process of filtering data before it is used to train the system.
It improves the model's accuracy and security by removing adversarial inputs and malicious data
that would otherwise compromise them.

This involves removing incomplete records from the training dataset, detecting and removing
outliers using statistical techniques, and ensuring the dataset is free of poisoned data.

Sanitized datasets are also often faster to process, since redundant and corrupt records have
been removed.
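The sanitization steps above can be sketched as a two-pass filter: drop incomplete rows, then drop rows flagged as outliers by a robust (median/MAD) z-score. The threshold of 3.5 and the toy feature matrix are illustrative assumptions:

```python
import numpy as np

def sanitize(X, z_threshold=3.5):
    """Drop incomplete rows, then rows with any extreme feature value."""
    # Step 1: remove records with missing (NaN) values.
    X = X[~np.isnan(X).any(axis=1)]
    # Step 2: robust z-score based on median/MAD, which a single poisoned
    # value cannot skew the way it would skew a mean/std z-score.
    med = np.median(X, axis=0)
    mad = np.median(np.abs(X - med), axis=0)
    mad = np.where(mad == 0, 1e-9, mad)    # guard against zero MAD
    z = 0.6745 * np.abs(X - med) / mad
    return X[(z < z_threshold).all(axis=1)]

X = np.array([
    [1.0,    2.0],
    [1.1,    1.9],
    [0.9,    2.1],
    [np.nan, 2.0],   # incomplete record: dropped in step 1
    [50.0,   2.0],   # likely poisoned value: dropped in step 2
    [1.0,    2.2],
])
print(sanitize(X).shape)  # -> (4, 2)
```

The median/MAD variant is used here rather than a mean/std z-score because in small datasets an extreme poisoned value inflates the standard deviation enough to hide itself from the ordinary z-score test.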

IV. Conclusions

Here, you briefly summarize the work carried out and suggest probable future work.

V. Team Contribution
Brief description of the distribution of work among the team members.

References
