
Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis

Attrayee Chakraborty¹, Mandar Karhade²

¹ MS Regulatory Affairs, Northeastern University
² Founder and CEO, Citingale
[email protected], [email protected]
arXiv:2406.08695v1 [cs.CY] 12 Jun 2024

Abstract

Artificial Intelligence (AI) is being adopted across the world and promises a new revolution in healthcare. While AI-enabled medical devices in North America hold 42.3% of the global market, the use of AI-enabled medical devices in other countries is still a story waiting to be unfolded. We aim to delve deeper into global regulatory approaches towards AI use in healthcare, with a focus on how common themes are emerging globally. We compare these themes to the WHO's regulatory considerations and principles on ethical use of AI for healthcare applications. Our work seeks to take a global perspective on AI policy by analyzing 14 legal jurisdictions including countries representative of various regions of the world (North America, South America, South East Asia, the Middle East, Africa, Australia, and the Asia-Pacific). Our eventual goal is to foster a global conversation on the ethical use of AI in healthcare and the regulations that will guide it. We propose solutions to promote international harmonization of AI regulations and examine the requirements for regulating generative AI, using China and Singapore as examples of countries with well-developed policies in this area.

1 Introduction

Artificial intelligence (AI) needs no introduction: it is impacting the healthcare space in unprecedented ways [Rahman et al., 2024]. AI is not a single, monolithic technology. Instead, it encompasses diverse subfields, such as machine learning and deep learning, which can be used alone or in combination to create intelligent applications [Bajwa et al., 2021]. Machine learning (ML), deep learning, and natural language processing have been cited as the most used in diagnosis and treatment recommendations, patient engagement and adherence, and administrative activities [Davenport and Kalakota, 2019]. Between 1995 and May 2024, the FDA cleared more than 880 artificial intelligence (AI) medical algorithms; 151 AI-enabled medical devices have been added to the FDA's list of approved devices as of this year [FDA, 2024]. AI-enabled medical devices approved by the FDA primarily belong to the medical specialty of radiology [FDA, 2024]. Key players in the field of AI-enabled devices are located in the US, Canada, and Europe, with North America being the major hub for such devices [Fraser et al., 2023], holding 42.3% of the global market share [MMR, 2024]. As of May 2024, 97% of approved AI/ML-enabled devices (856 out of 882) followed the 510(k) pathway in the US, showing the prevalence of 510(k)-cleared AI/ML-enabled devices in this space [FDA, 2024].

AI is being adopted across the world and promises a new revolution in healthcare [Rahman et al., 2024]. Legislation is picking up speed, as evidenced by the AI Index Annual Report 2023, which notes that 37 measures incorporating AI were signed into law by 2022, as compared to just one in 2016. Additionally, an examination of 81 countries' parliamentary records on AI reveals that since 2016, the number of times AI has been mentioned in international legislative proceedings has increased by almost 6.5 times [Maslej et al., 2023]. This indicates that the global regulatory landscape around AI is highly dynamic at this point in time.

We have attempted to cover the most recent developments (as of May 2024); however, we acknowledge that this paper may not reflect the most recent developments owing to the dynamic nature of AI regulations. Figure 1 shows the countries whose AI healthcare regulations are analyzed in this paper.

Figure 1: Map depicting areas of legal jurisprudence covered in the scope of this paper (indicated in green)

2 Related Work

Global AI governance has been well-studied in the legal and technical literature [Schiff et al., 2020; Daly et al., 2020; Walter, 2024], mainly in the context of principles of AI regulation across countries. Walter introduces the notion of global AI regulation and governance with a sector-agnostic approach, focusing on the socio-economic implications of the rapid advancement of AI technologies and the difficulties in establishing effective governance frameworks. Our work, which is in the context of global regulation of AI in healthcare, draws inspiration from Walter to extend sector-agnostic AI governance to sector-specific AI regulation in healthcare. Our work elaborates on AI regulations mentioning healthcare from a global perspective, examining 14 legal jurisdictions from different regions of the world, namely the EU, UK, Australia, Canada, Japan, Italy, Brazil, Egypt, Rwanda, Saudi Arabia, Singapore, India, China, and Hong Kong. While there is a large amount of literature available on the global regulatory policy and direction with respect to the US government's

approach on AI in healthcare [Wang and Preininger, 2019; Chae, 2020], there is limited discussion on the status, direction, or existing gaps in AI healthcare regulations for other key countries or regions beyond some brief mentions [Abdullahi Tsanni, 2024]. Murphy et al. [2021] distinctly highlight the lack of research on AI ethics in Low- or Middle-Income Countries (LMICs) and public health settings. They emphasize the urgent need for further investigation into the ethical implications of AI in these contexts to ensure its ethical development and implementation on a global scale. The scope of our work has been selected keeping this observation in mind, with the larger goal of increasing awareness and representation in conversations relating to the regulation of AI in healthcare.

There is a myriad of works on the application of AI in healthcare [Romagnoli et al., 2024; Goldberg et al., 2024; Jiang et al., 2017; Ghosh et al., 2024]. Existing literature discusses the requirement of ethical principles in AI governance and provides a high-level discussion of these principles in the context of AI in healthcare [Karimian et al., 2022; Giovanola and Tiribelli, 2022; Lehmann, 2021]. This paper builds on existing literature to analyze laws, regulations, policies, and guidance documents, within our scope, that demonstrate alignment with the WHO's key principles of ethical AI regulation.

While applications of generative AI (GenAI) and its governance have been discussed in existing literature [Meskó and Topol, 2023; Reddy, 2024; Jindal et al., 2024], our work extends this discussion to country-specific GenAI policies (China and Singapore). This paper also touches upon current legislation on GenAI, given the explosion of large language models (LLMs) like ChatGPT and the growing promise of GenAI to transform clinical workflows, research, and medical affairs [Viswa et al., 2024].

3 Material and Methods

In this work, we have conducted a legal analysis of 25 publicly available laws, guidance documents, and regulations issued in 14 legal jurisdictions (EU, UK, Australia, Canada, Japan, Italy, Brazil, Egypt, Rwanda, Saudi Arabia, Singapore, India, China, and Hong Kong). The choice of nations under the scope of this paper has been made to capture a truly global picture of regulation, as elaborated in Section 2 (see, in particular, the comments there referencing [Murphy et al., 2021] and [Abdullahi Tsanni, 2024]).

We have aimed to incorporate a comprehensive range of regulations related to AI in healthcare. However, currently, the global regulatory landscape predominantly addresses the use of AI in healthcare under the regulatory frameworks established for medical devices, specifically Software as a Medical Device (SaMD) [Palaniappan et al., 2024]. Also, most current AI regulations prioritize healthcare but do not provide healthcare-specific regulations [Reddy, 2023]. Therefore, we have analyzed both sector-agnostic generic AI regulations and healthcare-specific AI regulations, mostly in the medical device space. Sector-agnostic generic AI regulation, as we define it, refers to regulations that govern the use of AI across various sectors and industries without focusing on the applications or risks specific to AI in healthcare. These regulations provide a broad framework for AI governance, addressing general principles and requirements that apply to AI systems regardless of their specific use case.
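The dual-track classification described above (sector-agnostic versus healthcare-specific instruments) can be pictured as a simple coverage table per jurisdiction. The sketch below is purely illustrative: the jurisdictions, keys, and boolean entries are placeholders of our own choosing, not findings of this paper.

```python
# Hypothetical coverage table for reviewed instruments; the entries are
# illustrative placeholders, not this paper's actual findings.
SCOPES = ["sector_agnostic", "healthcare_specific"]

coverage = {
    "EU":        {"sector_agnostic": True,  "healthcare_specific": False},
    "Singapore": {"sector_agnostic": False, "healthcare_specific": True},
    "Canada":    {"sector_agnostic": False, "healthcare_specific": True},
}

def count_by_scope(table):
    """Count how many jurisdictions have an instrument of each scope."""
    return {s: sum(1 for row in table.values() if row[s]) for s in SCOPES}

print(count_by_scope(coverage))
# -> {'sector_agnostic': 1, 'healthcare_specific': 2}
```

In practice each cell would also record whether the instrument is binding or non-binding, which is the other comparative axis this review uses.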
The review also includes national policies in draft or implementation stages, developed by governments, agencies, and standards bodies.

This review considers a mix of four comparative parameters: sector-agnostic generic AI regulations, healthcare-specific AI regulations, non-binding instruments, and binding legal instruments. The regulatory frameworks and guidelines for AI in healthcare across these 14 jurisdictions were identified and downloaded from their respective government healthcare websites for analysis in this review. The search focused on key terms such as regulatory frameworks, legislations, laws, acts, strategies, policies, and guidelines.

We examine quotes and provide references from each of these legal instruments to demonstrate how they align with the WHO's recommended principles for the ethical use of AI in healthcare. The principles include documentation and transparency, risk management, intended use, clinical and analytical validation, data quality, and privacy. This alignment allows us to identify how nations across the world are incorporating WHO guidelines through their strategy, policy, and laws. We identify the legal clauses of jurisprudence tied to WHO's key principles of ethical AI regulation and reflect on their application in regulations. We also cross-reference publicly available work from international collaboratives and technical focus groups on healthcare-AI regulations to show how their work has influenced, and will continue to influence, national policies on AI in healthcare.

4 Global Regulatory Landscape of AI

4.1 Definitions

There is a lack of agreement on what is defined by AI [Krafft et al., 2020]. While most nations define specific aspects of AI, such as AI systems [Dwivedi et al., 2021], there is an absence of a clear, widely recognized definition of AI. A notable example is Japan's acknowledgment that AI is an 'abstract' concept and that it is 'difficult to strictly define the scope of artificial intelligence in a broad sense' [METI Japan, 2024], which is fair given that different kinds of AI have become specialized to particular use-cases, an example being GenAI [Kanbach et al., 2023].

This ambiguity in AI definition has likely contributed to the field's rapid growth and advancement [Stone et al., 2016]. Figure 2 represents how AI is defined in different nations.

4.2 Common themes in global regulations

The common themes in global AI regulations have been outlined by the OECD [OECD, 2019]. Existing literature [Reddy, 2023] discusses how general regulations on AI, while providing a broad framework, may not adequately address the specific challenges of AI applications in healthcare. In response to the growing need of countries to responsibly manage the rapid rise of AI health technologies, the WHO has developed regulatory considerations for AI applications in healthcare [WHO, 2023b], as described in Figure 3.

The principles can be applied to the use of AI in healthcare settings. To illustrate their relevance, let us consider the clinical setting. AI models trained on unrepresentative data can perpetuate and worsen existing health disparities due to societal discrimination or small sample sizes [Reddy et al., 2020]. In clinical settings, AI systems must prioritize patient privacy, protect against harm, and ensure patients have control over their data usage [Vayena et al., 2018]. Despite the promise of deep learning models in medical imaging and risk prediction, their lack of interpretability and explainability poses significant challenges in healthcare, where transparency is crucial for clinical decision-making [Char et al., 2018]. When selecting from multiple algorithms, it is crucial to evaluate risks related to data quality and the suitability of the foundational data to new contexts, such as variations in population and disease patterns [Magrabi et al., 2019]. Therefore, evaluation guidelines for AI systems should include assessing and collecting evidence on data quality to prevent unintended consequences and harmful outcomes [Magrabi et al., 2019].

The following sections will elaborate on each of these principles and analyze how different countries are positioning themselves with respect to them.

Documentation and Transparency

Transparency ensures that relevant stakeholders receive appropriate information about AI systems [Díaz-Rodríguez et al., 2023]. This can be achieved through different levels of transparency, including simulatability (human understanding of the model), decomposability (explaining model behavior and components), and algorithmic transparency (understanding the model's process and output) [Barredo Arrieta et al., 2020; Díaz-Rodríguez et al., 2023]. The ability of AI to learn independently from data poses a challenge when it comes to explaining the decision-making rationale of some AI models [Königstorfer and Thalmann, 2022], posing problems for their application in clinical settings [Smith, 2021]. Therefore, it is necessary to establish instruments and procedures for confirming that AI applications function as intended and adhere to all applicable laws and regulations [Königstorfer and Thalmann, 2022]. Appendix A (Table 1) explains in detail how laws within the scope of this paper address the WHO's principle of documentation and transparency.

Per our analysis, the EU AI Act (Chapter III, Article 11) is one of the strongest acts, declaring the requirement of technical documentation for high-risk AI systems to enable auditing, monitoring, and ensuring reproducibility of AI outputs and processes.

A number of regulations in other countries speak to the same principle (Table 1). Most laws in AI governance, in healthcare and beyond, mention transparency and explainability as requirements. However, the definition of transparency varies from 'communication of appropriate information about an AI system to relevant people' in the UK [Department for Science, Innovation and Technology, 2023] to 'transparency of governance measures and systems used' in Brazil [Senate of Brazil, 2023]. Transparency is defined in a more structured manner in the context of the healthcare sector by Canada, which defines it as "the degree to which appropriate and clear information about a device (that could impact risks and patient outcomes) is communicated to stakeholders" [Health Canada, 2023].
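Documentation requirements like these are often operationalized as a structured record kept alongside the model. The sketch below is a hypothetical minimal record: the field names are our own illustrative choices and do not reproduce any statutory annex or agency template.

```python
from dataclasses import dataclass, field

@dataclass
class ModelDocumentation:
    """Hypothetical technical-documentation record for an AI-enabled device."""
    intended_use: str
    training_data_summary: str   # provenance and composition of training data
    performance_metrics: dict    # e.g. {"sensitivity": 0.91, "ppv": 0.88}
    risk_classification: str     # e.g. "high-risk" under an EU AI Act-style scheme
    version: str = "0.1"
    known_limitations: list = field(default_factory=list)

# Example record for an imaginary triage-support model
doc = ModelDocumentation(
    intended_use="Triage support for chest X-ray findings",
    training_data_summary="120k studies, 3 sites, 2015-2022",
    performance_metrics={"sensitivity": 0.91, "ppv": 0.88},
    risk_classification="high-risk",
    known_limitations=["not validated for pediatric patients"],
)
print(doc.risk_classification)
```

Keeping such a record versioned with the model is one way to support the auditing, monitoring, and reproducibility goals mentioned above, whatever the jurisdiction-specific format turns out to be.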
Figure 2: Table representing definitions of AI across nations
Figure 3: Key regulatory considerations as outlined by the WHO for ethical use of AI in healthcare
Risk Management

The National Institute of Standards and Technology (NIST) uses the definition of risk management as mentioned in ISO 31000:2018 for AI systems: "Risk management refers to coordinated activities to direct and control an organization with regard to risk" [National Institute of Standards and Technology (NIST), 2023]. The International Telecommunication Union (ITU) Focus Group on Artificial Intelligence for Health (FG-AI4H, a partnership of the ITU and the World Health Organization to establish a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage, or treatment decisions) elaborates on this with its recommendation of "a risk management approach that addresses risks associated with AI systems, such as cybersecurity threats and vulnerabilities, underfitting, algorithmic bias etc." in the total product lifecycle of an AI system [Salathé et al., 2018]. Appendix A (Table 2) explains in detail how laws within the scope of this paper address the WHO's principle of risk management.

As per our analysis, risk management is being defined across a spectrum by nations, with prescriptive guidance provided by Brazil on risk classification and risk impact assessment [Senate of Brazil, 2023] and Japan recommending "conducting audits in the AI utilization cycle" [Government of Japan, 2022]. Risks linked to cybersecurity and privacy are highlighted by the UK [UK Government, 2021], while pre- and post-market surveillance is highlighted in Canada's approach towards medical devices [Health Canada, 2023]. Rwanda [Ministry of ICT and Innovation, Rwanda, 2020] and Egypt [Organisation for Economic Co-operation and Development (OECD), 2023] acknowledge AI risk assessment as a tool for responsible AI, while Singapore [Health Sciences Authority (HSA), 2022] and India [Indian Council of Medical Research (ICMR), 2023] have published technical guidance on process controls and change management. Saudi Arabia [Saudi Food and Drug Authority (SFDA), 2023] emphasizes the involvement of a cross-functional team for performing risk management.

Data Quality

Data quality is the extent to which a dataset satisfies the needs of the user and is suitable for its intended purpose [Johnson et al., 2015]. While data quality issues can impact all modeling efforts, they are particularly problematic in healthcare [Hasan and Padman, 2006], and particularly challenging there due to the lack of standardized approaches for describing and handling such issues, the absence of a universal record storage model, the multitude of vocabularies and terminologies used, the inherent complexity of healthcare data, and the ongoing evolution of medical knowledge [Simon and Aliferis, 2024].

The ITU FG-AI4H recommends that "developers should consider whether available data are of sufficient quality to support the development of the AI system to achieve the intended purpose [Salathé et al., 2018]. Furthermore, developers should consider deploying rigorous pre-release evaluations for AI systems to ensure that they will not amplify any . . . biases and errors. Careful design or prompt troubleshooting can help identify data quality issues early and can prevent or mitigate possible resulting harm. Stakeholders should also consider mitigating data quality issues and the associated risks that arise in health-care data, as well as continue to work to create data ecosystems to facilitate the sharing of good-quality data sources" [Salathé et al., 2018]. Appendix A (Table 3) explains in detail how laws within the scope of this paper address the WHO's principle of data quality.

As per our analysis, we find that Australia exemplifies "data ecosystems" and "sharing of good-quality data sources" through its mention of the healthcare system and national interoperability standards [University, 2023].

Japan and Rwanda also propose similar concepts. Japan highlights an important concept of "converting data in a form suitable for AI" and the creation of "data economic zones", which will enable the use of AI for healthcare applications [Government of Japan, 2022]. Rwanda proposes an implementation plan for the availability and accessibility of quality data through indicators such as the size of open AI-ready data [Ministry of ICT and Innovation, Rwanda, 2020].

While data quality is essential for building accurate AI models, an organization's quality culture influences its data management approaches [FDA, 2019]. The UK has a similar approach, as it speaks of using a data quality culture, action plans, and root cause analysis to address data quality issues at the source [UK Government, 2024]. The Framework [UK Government, 2024] also speaks of data maturity models and metadata guidance to bring data quality to life. The European Health Data Space (EHDS-TEHDAS) data quality framework recommends more granular mechanisms of data quality management [European Union, 2024]. Singapore [Health Sciences Authority (HSA), 2022], Hong Kong [HK Government, 2024], and India [Indian Council of Medical Research (ICMR), 2023] also discuss the quality of learning and training datasets for accurate validation.

Intended Use, Analytical and Clinical Validation

The WHO points to the International Medical Device Regulators Forum (IMDRF)'s definition of clinical evaluation [WHO, 2023a], which consists of valid clinical association, analytical validation, and clinical validation, as quoted below:

• "Valid clinical association: Is there a valid clinical association between your SaMD output and your SaMD's targeted clinical condition?

• Analytical validation: Does your SaMD correctly process input data to generate accurate, reliable, and precise output data?

• Clinical validation: Does use of your SaMD's accurate, reliable, and precise output data achieve your intended purpose in your target population in the context of clinical care?"
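The analytical- and clinical-validation questions above are typically answered with quantitative performance metrics; Saudi Arabia's guidance discussed below, for instance, cites positive predictive value (PPV) and the likelihood ratio negative (LR-). A minimal sketch of how such metrics fall out of a confusion matrix follows; the counts are invented for illustration, not drawn from any cited submission.

```python
def validation_metrics(tp, fp, tn, fn):
    """Clinical-validation metrics from confusion-matrix counts (illustrative)."""
    sensitivity = tp / (tp + fn)              # true positive rate
    specificity = tn / (tn + fp)              # true negative rate
    ppv = tp / (tp + fp)                      # positive predictive value
    lr_neg = (1 - sensitivity) / specificity  # likelihood ratio negative (LR-)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "lr_neg": lr_neg}

# Invented counts for a hypothetical SaMD reader study
metrics = validation_metrics(tp=90, fp=10, tn=180, fn=20)
print({k: round(v, 3) for k, v in metrics.items()})
# -> {'sensitivity': 0.818, 'specificity': 0.947, 'ppv': 0.9, 'lr_neg': 0.192}
```

Note that a higher PPV is better, while a lower LR- indicates a stronger rule-out test; which metric applies, and at what threshold, is defined by the relevant regulator for the intended use environment.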
On analysis of the jurisprudence within the scope of this paper, we found that national laws in this domain were lacking. While there were technical guidance documents specific to AI applications in healthcare published in Canada [Health Canada, 2023], Singapore [Health Sciences Authority (HSA), 2022], Hong Kong [HK Government, 2024], Saudi Arabia [Saudi Food and Drug Authority (SFDA), 2023], and India [Indian Council of Medical Research (ICMR), 2023], most of the technical documents by other agencies currently address AI as a subset of software, and specific requirements are yet to be updated. Appendix A (Table 4) explains in detail how laws within the scope of this paper address the WHO's principle of intended use and clinical validation.

The ITU FG-AI4H recommends the use of randomized clinical trials as the gold standard for evaluation of comparative clinical performance, especially for the highest-risk tools or where the highest standard of evidence is required [ITU FG-AI4H, 2022]. It also associates documentation and transparency with validation, mentioning training dataset composition and external analytical validation in an independent dataset. Currently, there are a number of international standards underway, such as ISO/IEC TC215 [ISOTC, 2024], IEEE P2802 [StanDict, 2024], and IEC/TC62 PT 63450 [IEC, 2024], which regulatory guidelines can later reference.

Singapore mentions the type of clinical evidence recommended to support the clinical evaluation process for software and AI-enabled medical devices, such as acceptance limits of testing parameters [Health Sciences Authority (HSA), 2022]. Saudi Arabia also notes the absence of internationally aligned frameworks for clinical evaluation of AI/ML-enabled medical devices and has gone on to reference the IMDRF recommendations, while specifying examples of metrics of clinical validation in intended use environments, such as positive predictive value (PPV) and likelihood ratio negative (LR-), along with mentioning a value greater than 0.81 as admissible for clinical validation [Saudi Food and Drug Authority (SFDA), 2023].

Another example of specific guidance in this domain is India's recommendation of the use of Standard Protocol Items: Recommendations for Interventional Trials-Artificial Intelligence (SPIRIT-AI) and Consolidated Standards of Reporting Trials-Artificial Intelligence (CONSORT-AI) as frameworks for designing and running clinical assessment trials related to interventions with AI as a component [Indian Council of Medical Research (ICMR), 2023]. Appendix A (Table 4) details how the jurisprudence in this paper relates to this principle.

Privacy and Data Protection

There have been a number of laws passed in the spirit of privacy and data protection, with the EU GDPR coming into effect in 2018 [GDPR, 2018]. The GDPR's data-protection-by-design provisions (Articles 25 and 32) [GDPR, 2018] are being echoed by other nations as well, such as India's proposed data privacy by design policy [Government of India, 2023]. Privacy impact assessments, a popular approach for proactive privacy risk assessment and mitigation, are frequently included in privacy frameworks. Particular to health data, the European Health Data Space (EHDS) [European Union, 2024] seeks to foster ownership of healthcare data by individuals and builds further on the GDPR.

The WHO Global Strategy on Digital Health (2020-2025) [World Health Organization, 2021] classifies health data as sensitive personal data, or personally identifiable information, that requires a high standard of safety and security. India's ICMR guidelines [Indian Council of Medical Research (ICMR), 2023] call out anonymization of data in line with the WHO strategy. It is interesting to note, however, that the anonymization of data does not guarantee privacy, with a study showing how people can be re-identified from an anonymized data collection by providing their zip code, gender, and birthdate [Rocher et al., 2019].

Singapore [Health Sciences Authority (HSA), 2022] emphasizes cybersecurity requirements for connected medical devices, with a focus on design controls, test reports, and traceability. Additionally, cybersecurity and privacy go hand in hand, an example being the UK's 'Plan for Digital Regulation' [Department for Digital, Culture, Media & Sport (DCMS), 2022] and Saudi Arabia's 'Guidance on AI/ML based Medical Devices' [Saudi Food and Drug Authority (SFDA), 2023] focusing on infrastructure security. Appendix A (Table 5) details how the jurisprudence in this paper relates to this principle.

4.3 Regulations on AI in Healthcare

Our analysis of the laws and regulations within the scope of this paper shows that countries are at different stages of developing AI governance frameworks. Most nations are still in the strategy and policy stages of sector-agnostic generic AI regulation, while healthcare agencies in specific countries have been seen to provide healthcare-specific guidance on AI regulation.

Figure 4 provides a visual representation of the AI regulatory landscape in healthcare across the jurisdictions analyzed in this paper, illustrating the current state of AI policy development in each region. Figure 6 provides a detailed timeline of the laws analyzed in this paper and their relation to AI regulation in healthcare. Figure 5 provides a holistic overview of which jurisprudence in this paper is related to the WHO's principles for regulation of AI in healthcare [WHO, 2023a] (more information is presented in Appendix A, Tables 1, 2, 3, 4 and 5).

As per our analysis, most documents discussed in this paper touch upon WHO's key principles, with internal references to international standards such as those of the ISO and the OECD. Most documents discussed have been observed to converge on WHO's principles.

The most active nations in AI regulation have been the US, EU, and China as per recent regulatory reports [Fritz and Giardini, 2024]. However, our analysis shows that nations in the Middle East and Southeast Asia are also picking up legislation and policy centered around regulation of AI in healthcare. An example of this is India [Indian Council of Medical Research (ICMR), 2023], which calls out ethical principles for AI in healthcare. These principles include acceptable tests for clinical validation of AI used in healthcare and an ethics checklist that covers participant recruitment methods used in training models. Saudi Arabia has also taken a prescriptive approach by elaborating expectations and requirements for AI/ML-based device manufacturers, such as clinical evaluation, risk management, and quality management systems [Saudi Food and Drug Authority (SFDA), 2022].

African nations are picking up pace on framing policies for AI regulation, with more focus on infrastructure development and user privacy. Rwanda has entered into contracts with digital healthcare providers on AI-powered triage, symptom-checking, and cancer detection [AUDA-NEPAD, 2024].
More details on the prescriptive nature of specific guidance documents, laws, policies, and regulations are provided in the supplement (Tables 1, 2, 3, 4 and 5).

We observe that the WHO core principles [WHO, 2023a] on AI regulation in healthcare have already been elucidated in various pre-existing standards for medical devices and pharmaceuticals across nations: the principles of transparency, intended use, clinical validation, risk management, and privacy have been exhaustively addressed in previously published standards such as ISO 13485:2016 and ISO 14971:2019. The ITU FG-AI4H approach demarcates AI requirements for medical devices into general, pre-market, and post-market requirements. This approach, which follows the structure of existing total product life cycle approaches for health applications, demonstrates that AI/ML-enabled products have both general requirements (like any other product) and AI-specific requirements that must be considered independently [ITU FG-AI4H, 2022].

While new AI-specific legal instruments are emerging, many countries are also incorporating AI regulation into existing documents by addressing the additional requirements necessary for AI. For example, Singapore appended an additional section (Section 9) to its guidance on software as a medical device (SaMD) [Health Sciences Authority (HSA), 2022]. This may be an effective stop-gap solution to regulate AI-enabled products in healthcare while more powerful AI laws are being developed.

The convergence of opinion by most nations on regulation of AI used in healthcare is a positive development, given the differing opinions on generic AI regulation. For example, the EU takes a more proactive approach to AI regulation [Stahl et al., 2022], whereas countries like Japan, South Korea, and Singapore are mostly prioritizing AI capability development and research [Radu, 2021]. In contrast, China has adopted a more "top-down" national strategy [Zeng, 2022]. Italy, on the other hand, has been doubling down on privacy concerns, as evidenced by its temporary ban on ChatGPT [Bolici et al., 2024]. We believe that these divergent approaches to generic AI regulation can create a regulatory burden on companies using AI. Comprehensive, healthcare-specific AI regulations are still needed [Reddy et al., 2020]. However, the current reliance on soft-law approaches [Palaniappan et al., 2024] allows for the flexibility and adaptability necessary for healthcare regulations to align with WHO guidelines [WHO, 2023a].

While many of the existing AI governance laws are overarching and cover multiple sectors including healthcare, specific focus on regulating AI in healthcare is still a challenge [Simon and Aliferis, 2024]. This is more relevant with the rise of LLMs in healthcare, such as Med-PaLM, ChatDoctor

5 Generative AI: The New Frontier

5.1 GenAI Regulation: Why Do We Need To Regulate It Differently?

Generative AI, unlike traditional AI, uses unsupervised learning and generative models to create entirely new data that resembles training data [Hacker et al., 2023]. This makes it extremely vulnerable to hallucinations, bias, and misuse [Fui-Hoon Nah et al., 2023]. Neural network models, which are the core of GenAI, suffer from a lack of transparency and explainability, making it difficult to audit for biases and privacy violations [Salahuddin et al., 2022]. While AI governance is picking up speed, regulations surrounding GenAI may need to be formed keeping these specific differences in mind [Hacker et al., 2023].

The rapid evolution of GenAI makes risk analysis a challenging topic when evaluating business potential, and thus makes regulation difficult [TLR Health Europe, 2023]. In current use cases, most GenAI products are trained on structured and unstructured healthcare data containing personally identifiable information [Petrenko and Boloban, 2023]. Moreover, while GenAI has the potential to reduce the clinical administrative burden on healthcare workers, inaccurate information can adversely affect patients [Harrer, 2023]. This underscores the need for clear regulation of GenAI in healthcare settings [TLR Health Europe, 2023].

The WHO's paper on regulatory considerations on artificial intelligence for health [WHO, 2023a] highlights how GenAI may already be violating the GDPR, as summarized in Figure 7.

As per the European Data Protection Board (EDPB), several Supervisory Authorities have initiated data protection investigations under Article 58(1)(a) and (b) GDPR against OpenAI (developer of the LLM ChatGPT) in the context of the ChatGPT service [European Data Protection Board (EDPB), 2024]. There has been a special task force designated for investigating how ChatGPT is positioned with respect to the principles of lawfulness, data collection, fairness, transparency, data accuracy, and subject rights [European Data Protection Board (EDPB), 2024]. The Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) also recommends communicating "the need for caution in the clinical use of generative AI when it is currently untested or unregulated for clinical settings, including the preparation of clinical documentation." (Recommendation 4, AAAiH, 2024).

National and global regulatory bodies are struggling to keep pace with the rapid advancements in GenAI, as the technology's trajectory remains uncertain [TLR Health Europe, 2023]. Regulatory mechanisms for GenAI have been
and ClinicalBERT [Yang et al., 2023] which are at the fore- proposed advocating three layers of regulation: universal
front of medical diagnosis, treatment, patient education and technology-neutral regulation, regulation on high-risk appli-
clinical documentation. cations of GenAI rather than pre-trained models, and regula-
GenAI application in healthcare is expected to grow at a tion on information access [Hacker et al., 2023]. The chal-
CAGR of 35.14% between 2023 and 2032 [Precedence Re- lenge for regulatory authorities lies in anticipating the full
search, 2024], and over two-thirds of US physicians view scope of GenAI’s evolution and developing comprehensive
GenAI as beneficial in healthcare [Wolters Kluwer, 2024]. regulations that address its multifaceted implications [TLR
Regulation of GenAI used in healthcare requires a precise ap- Health Europe, 2023]. We have identified two nations (China
proach [Meskó and Topol, 2023]. and Singapore) with specific GenAI regulations [Luckett,
Figure 4: Visual representation of the current AI regulatory landscape in healthcare across 14 jurisdictions
Figure 5: Depiction of laws illustrating WHO’s core principles on ethical AI use in healthcare

Figure 6: Timeline presenting the evolution of AI legislation across 14 jurisdictions, categorizing each legislation as binding or non-binding
and sector-agnostic or healthcare-specific
Figure 7: Key instances of how large language models (LLMs) vio-
lated EU General Data Protection Regulations (GDPR)

2023; Soon and Tan, 2023] as discussed in Section 5.1 as Figure 8: Key regulations on Generative AI (GenAI) in China
representative examples of GenAI specific regulation.
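The GenAI market-growth projection quoted above (a CAGR of 35.14% between 2023 and 2032 [Precedence Research, 2024]) is easy to sanity-check: compounded over nine annual steps, it implies roughly a fifteen-fold expansion. A minimal sketch of the arithmetic (the 35.14% rate is from the cited forecast; treating 2023–2032 as nine compounding periods is our assumption about how the forecast window is counted):

```python
def compound_multiplier(cagr: float, periods: int) -> float:
    """Total growth multiplier implied by a constant compound annual growth rate."""
    return (1.0 + cagr) ** periods

# CAGR of 35.14% between 2023 and 2032 -> nine annual compounding steps
multiplier = compound_multiplier(0.3514, 2032 - 2023)
print(f"Implied market-size multiplier: {multiplier:.1f}x")  # ~15.0x
```

Nothing in the regulatory argument depends on the exact multiplier; the point is that a 35% CAGR compounds into an order-of-magnitude change within a decade, which is the pace regulators are being asked to match.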
5.2 Current Legislation on GenAI

Currently, we did not come across any legal jurisprudence on GenAI used specifically in healthcare. However, there has been some legal activity on GenAI as a whole. China and Singapore are prominent examples of how GenAI-specific legislation has shaped their legal landscapes.

China

The Chinese government has been enacting a number of laws to regulate GenAI. As shown in Figure 8, the common theme of these laws is their emphasis on regulating data from illegal sources used to train models, privacy and security, accountability for content production, tagging of GenAI-generated content, and complaint management [Wu, 2023]. A number of other standards were released early in 2024, such as the Draft standards for security specifications on generative AI pre-training and fine-tuning data processing activities (GenAI training data draft standards), the Draft standards for security specifications on GenAI data annotation (GenAI annotation draft standards), and the Basic security requirements for generative artificial intelligence service (GenAI standards) [Hurcombe and Neo, 2024]. These standards also highlight the protection of national security, intellectual property, and individual rights [Hurcombe and Neo, 2024]. It is noteworthy that the Generative AI Measures apply extraterritorially, allowing China to require non-compliant foreign generative AI service providers operating in China to take necessary measures [Yan, 2024].

Singapore

Singapore had released its Model AI Governance Framework in 2019 to lay the groundwork for the responsible use of AI. With the rise of GenAI, the AI Verify Foundation and the Infocomm Media Development Authority of Singapore (IMDA) released the ‘Discussion Paper on Generative AI: Implications for Trust and Governance’ [IMDA, 2023b]. In response to the discussion paper, AI Verify and IMDA have jointly released the ‘Model AI Governance Framework for Generative AI’ [IMDA, 2024]. While this Framework focuses on the known topics of data quality, transparency, incident reporting, security, safety and testing, it also addresses content provenance, such as digital watermarking and cryptographic provenance [IMDA, 2024]. The collaboration has also proposed an ‘initial set of standardized model safety evaluations for LLMs’, including domain-specific tests for medicine [IMDA, 2023a].

6 Results and Conclusion

In this paper, we analyzed 25 policy, strategy, and guidance-based documents, laws and acts centered around AI in healthcare across 14 diverse legal jurisdictions and underscored a global drive towards responsible AI integration within healthcare. Most of the specific regulation on AI in healthcare that we analyzed came in the form of non-binding instruments (6), which has both positive and negative consequences. Non-binding approaches offer flexibility and can be easily adapted to the evolving AI landscape. However, their voluntary nature means organizations may choose not to adopt them.

Our findings (Sections 4.2, 4.3, and Appendix A) highlight a shared commitment to aligning with the WHO's ethical AI principles, indicating a promising trajectory for the future of AI in healthcare. However, the variability in the specific strategies and the pace of adoption across regions emphasize the need for ongoing international dialogue and cooperation (Section 4.3).

Most regulations on AI broadly tackle fundamental principles common to most technologies (such as fairness, transparency, bias and privacy). Specific healthcare-centric AI regulations have mostly been proposed by dedicated healthcare regulatory bodies in government, such as the FDA (US), Health Canada (Canada), MHRA (UK), ICMR (India) and others (Section 4.3). As explored in Section 4.2 and Appendix A, we conclude that existing legislation converges with WHO principles [WHO, 2023a]. We have also identified how emerging countries are building requirements as per WHO principles [WHO, 2023a] (Section 4.2). This approach has met our objective of focusing on global regulation (Section 3) and provides insights beyond the existing literature (Section 2).

We have also analyzed regulations on 4 comparative parameters: sector-agnostic generic AI regulations, healthcare-specific AI regulations, non-binding instruments and binding legal instruments. We have examined the timeline of the evolution of jurisprudence under the scope of this paper for 14 nations (Figure 6).

To take a step further, we have discussed two examples of countries framing policies around GenAI, as GenAI promises to transform healthcare (Section 5).

7 Future Directions

We believe that regulations on AI in healthcare can develop as a three-pronged approach:

Collaboration by stakeholders

We believe that regulatory bodies can refer to deliverables from focus groups such as the International Telecommunication Union Focus Group on Artificial Intelligence for Health (FG-AI4H). This group has published considerations for manufacturers and regulators on conducting comprehensive requirements analysis and streamlining conformity assessment procedures for continual product improvement in an iterative and adaptive manner [ITU FG-AI4H, 2022]. Such technical guidance can ensure that specific considerations of AI in healthcare are addressed in regulatory discussions. A number of international standards are under development at the time of writing of this paper, such as ISO/TC 215 (Health informatics) [ISOTC, 2024], ISO/IEC AWI TR 18988 (Artificial intelligence — Application of AI technologies in health informatics) [for Standardization (ISO), 2024b] and ISO/CD TS 24971-2 (Medical devices — Guidance on the application of ISO 14971 Part 2: Machine learning in artificial intelligence) [for Standardization (ISO), 2024a], to name a few, which can be referred to by regulators and the healthcare industry.

The evolution of AI regulation amid fast-paced changes in technology [Digital Regulation Platform, 2024] can take inspiration from the nature of regulations on drones, which evolved from an unregulated technology to a highly regulated one in a short timeframe [Fenwick et al., 2017]. We can hope that regulators of AI will likewise adapt to the fast-paced nature of AI and develop sector-specific regulations in a short timeframe.

By expanding global regulatory alliances and harmonizing requirements, individual nation states can avoid regulatory blind spots, be more prescriptive about expectations, and increase the speed of well-regulated, safe, and ethical innovation. Including manufacturers in the process of harmonization has also been called out by the WHO as a way to include all stakeholders [WHO et al., 2024]. This approach will likely reduce the regulatory burden on manufacturers, healthcare systems, and patients by decreasing avoidable variation in regulatory requirements, thereby maximizing their potential benefits for global health while mitigating potential risks.

A possibility of harmonization

The call for global harmonization of regulations, be it in pharmaceuticals, biologics or medical devices, has been steadily increasing within the industry over the years [Lindström-Gommers and Mullin, 2019]. As highlighted in this paper, the regulation of artificial intelligence (AI) in healthcare remains in its early stages (Section 4.3), presenting a unique opportunity for harmonization. The foundational principles outlined by the WHO [WHO, 2023a] offer a promising framework for alignment. One example is the alignment of the EU's new AI Office with the UK's AI Safety Institute, which could potentially interface to lead to a greater degree of global harmonization of AI regulation. Another example is a first-of-its-kind international treaty adopted by the Council of Europe (CoE): the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Convention), with 46 member states and with countries from all over the world eligible to join it [Leslie et al., 2021]. Harmonization of regulations on AI in healthcare is yet to be seen, especially with the rise of regulatory sandboxes and existing differences in healthcare systems around the world [Leckenby et al., 2021; Cancarevic et al., 2021]. However, given the diverse approaches of individual countries, ranging from pro-innovation to pro-risk, achieving true harmonization may prove challenging [Thierer, 2023].

Experts are already concerned about the divergence between sector-agnostic AI regulation and healthcare-specific regulations, an example being the EU AI Act and the EU MDR being described as an "arranged marriage" and "conjoined twins" [Regulatory Affairs Professionals Society (RAPS), 2024]. Even within the AI space, a variety of definitions may lead to greater confusion down the line. Since the regulation of AI is relatively new in the healthcare space, there is still time to harmonize definitions, terminologies, and legislation related to AI in healthcare. The next step in AI regulation would be to issue healthcare-specific regulations and guidance that resolve any inconsistencies between new and existing frameworks.

Risk as the new focus

As witnessed in the EU AI Act and the WHO guidance on the ethical use of LLMs in healthcare, a risk-based approach with a focus on accountability at different stages of the value chain of development, deployment, and provision of AI systems is warranted [WHO et al., 2024]. The FDA's recent inclusion of ISO 14971:2019 as part of its updated Quality Management System Regulation (QMSR) also echoes similar intentions in incorporating risk into systems [Kolton, 2024]. In our analysis, we note that multiple countries have mentioned risk management and planning as key expectations (Appendix A). It is of interest to see how AI validation tools help in converting principles such as risk management into practice, with more than 230 tools for trustworthy AI spanning the US and UK [Gunashekar et al., 2024]. Existing tools specific to healthcare include Aival (for clinical users), Python
NLP (for biomedical literature), Google What-If (for analyzing model prediction changes with changes in the dataset), and Optical Flow (for medical imaging), to name a few [Gunashekar et al., 2024]. The OECD website can be a great starting point for healthcare regulators to establish acceptable evidence parameters and for industry members to validate AI systems [OECD, 2024]. The validity of these tools is yet to be seen, with studies revealing that the effectiveness of emerging AI auditing tools may be questionable [Graham et al., 2020].

Ethical Statement

There are no ethical issues.

Acknowledgments

We would like to extend our appreciation to the AI-Global Health Initiative (AI-GHI) for providing the platform to connect with diverse stakeholders. Recognized as an FDA Collaborative Community in 2019 and 2021-2024 (active), the AI-GHI leverages the diverse background of its stakeholders from the medical device, pharmaceutical, biological, hospital, and research sectors to identify the current and future needs of healthcare and provide guidance on how to navigate barriers and risks around the implementation of AI/ML in all of healthcare. Special thanks to Lacey Harbour for her invaluable help with comments on this paper.

AC extends heartfelt gratitude to Susobhan Ghosh for his invaluable assistance in formatting this paper in LaTeX and his meticulous review of the formatting.

References

[Abdullahi Tsanni, 2024] Abdullahi Tsanni (2024). Africa's push to regulate AI starts now. Retrieved from https://www.technologyreview.com/2024/03/15/1089844/africa-ai-artificial-intelligence-regulation-au-policy/.

[AUDA-NEPAD, 2024] AUDA-NEPAD (2024). Regulation and Responsible Adoption of AI in Africa Towards Achievement of AU Agenda 2063.

[Bajwa et al., 2021] Bajwa, J., Munir, U., Nori, A., and Williams, B. (2021). Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthcare Journal, 8(2):e188–e194.

[Barredo Arrieta et al., 2020] Barredo Arrieta, A., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., García, S., Gil-López, S., Molina, D., Benjamins, R., Chatila, R., and Herrera, F. (2020). Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58:82–115.

[Bolici et al., 2024] Bolici, F., Varone, A., and Diana, G. (2024). Unpopular policies, ineffective bans: Lessons learned from ChatGPT prohibition in Italy. In ECIS 2024 Proceedings, volume 11.

[Cancarevic et al., 2021] Cancarevic, I., Plichtová, L., and Malik, B. H. (2021). Healthcare systems around the world. In Tohid, H. and Maibach, H., editors, International Medical Graduates in the United States, pages 45–67. Springer, Cham.

[Chae, 2020] Chae, Y. (2020). US AI regulation guide: Legislative overview and practical considerations. The Journal of Robotics, Artificial Intelligence & Law, 3(1):17–40.

[Char et al., 2018] Char, D. S., Shah, N. H., and Magnus, D. (2018). Implementing machine learning in health care - addressing ethical challenges. The New England Journal of Medicine, 378(11):981–983.

[Daly et al., 2020] Daly, A., Hagendorff, T., Li, H., Mann, M., Marda, V., Wagner, B., and Wang, W. W. (2020). AI, governance and ethics: global perspectives. University of Hong Kong Faculty of Law Research Paper, (2020/051).

[Davenport and Kalakota, 2019] Davenport, T. and Kalakota, R. (2019). The potential for artificial intelligence in healthcare. Future Healthcare Journal, 6(2):94–98.

[Department for Digital, Culture, Media & Sport (DCMS), 2022] Department for Digital, Culture, Media & Sport (DCMS) (2022). Plan for Digital Regulation: Developing an Outcomes Monitoring Framework 2022.

[Department for Science, Innovation and Technology, 2023] Department for Science, Innovation and Technology (2023). AI regulation: A pro-innovation approach. White Paper.

[Digital Regulation Platform, 2024] Digital Regulation Platform (2024). Digital regulation platform. Retrieved from https://digitalregulation.org/3004297-2/#post-3004929-endnote-ref-21.

[Dwivedi et al., 2021] Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., Duan, Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V., Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kizgin, H., Kronemann, B., Lal, B., Lucini, B., and Medaglia, R. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57:101994.

[Díaz-Rodríguez et al., 2023] Díaz-Rodríguez, N., Ser, J. D., Coeckelbergh, M., López, M., Herrera-Viedma, E., and Herrera, F. (2023). Connecting the dots in trustworthy artificial intelligence: From AI principles, ethics, and key requirements to responsible AI systems and regulation. Information Fusion, 99:101896.

[European Data Protection Board (EDPB), 2024] European Data Protection Board (EDPB) (2024). Report of the work undertaken by the ChatGPT taskforce. Retrieved from https://www.edpb.europa.eu/system/files/2024-05/edpb_20240523_report_chatgpt_taskforce_en.pdf.

[European Union, 2024] European Union (2024). European Health Data Space.

[FDA, 2019] FDA (2019). Proposed regulatory framework for modifications to artificial intelligence/machine learning (AI/ML)-based software as a medical device (SaMD) - discussion paper and request for feedback. Last Accessed: 22 May, 2024.

[FDA, 2024] FDA (2024). Artificial intelligence and machine learning (AI/ML)-enabled medical devices. Last Accessed: 22 May, 2024.

[Fenwick et al., 2017] Fenwick, M. D., Kaal, W. A., and Vermeulen, E. P. M. (2017). Regulation tomorrow: What happens when technology is faster than the law? American University Business Law Review, 6(3). Available at: http://digitalcommons.wcl.american.edu/aublr/vol6/iss3/1.

[Florian Königstorfer and Thalmann, S., 2022] Königstorfer, F. and Thalmann, S. (2022). AI documentation: A path to accountability. Journal of Responsible Technology, 11:100043.

[Food and (SFDA), 2023] Saudi Food and Drug Authority (SFDA) (2023). Guidance on AI/ML based medical devices. Quality management systems and documentation for AI/ML based medical devices.

[for Economic Co-operation and (OECD), 2023] Organisation for Economic Co-operation and Development (OECD) (2023). OECD Artificial Intelligence Review of Egypt. An in-depth review of Egypt's AI policies and initiatives by the OECD.

[for Standardization (ISO), 2024a] International Organization for Standardization (ISO) (2024a). ISO/CD TS 24971-2.

[for Standardization (ISO), 2024b] International Organization for Standardization (ISO) (2024b). ISO/IEC AWI TR 18988.

[Fraser et al., 2023] Fraser, A. G., Biasin, E., Bijnens, B., Bruining, N., Caiani, E. G., Cobbaert, K., Davies, R. H., Gilbert, S. H., Hovestadt, L., Kamenjasevic, E., Kwade, Z., McGauran, G., O'Connor, G., Vasey, B., and Rademakers, F. E. (2023). Artificial intelligence in medical device software and high-risk medical devices – a review of definitions, expert recommendations and regulatory initiatives. 20(6):467–491.

[Fritz, J. and Giardini, T., 2024] Fritz, J. and Giardini, T. (2024). Emerging contours of AI governance and the three layers of regulatory heterogeneity. Digital Policy Alert Working Paper 24-001.

[Fui-Hoon Nah et al., 2023] Fui-Hoon Nah, F., Zheng, R., Cai, J., Siau, K., and Chen, L. (2023). Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research, 25(3):277–304.

[GDPR, 2018] GDPR (2018). General Data Protection Regulation (GDPR) – final text neatly arranged.

[Ghosh et al., 2024] Ghosh, S., Guo, Y., Hung, P.-Y., Coughlin, L., Bonar, E., Nahum-Shani, I., Walton, M., and Murphy, S. (2024). reBandit: Random effects based online RL algorithm for reducing cannabis use. arXiv e-prints, pages arXiv–2402.

[Giovanola and Tiribelli, 2022] Giovanola, B. and Tiribelli, S. (2022). Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI & Society, 38(2):549–563.

[Goldberg et al., 2024] Goldberg, C. B., Adams, L., Blumenthal, D., Brennan, P. F., Brown, N., Butte, A. J., Cheatham, M., deBronkart, D., Dixon, J., Drazen, J., Evans, B. J., Hoffman, S. M., Holmes, C., Lee, P., Manrai, A. K., Omenn, G. S., Perlin, J. B., Ramoni, R., Sapiro, G., and Sarkar, R. (2024). To do no harm — and the most good — with AI in health care. NEJM AI, 1(3).

[Government of India, 2023] Government of India (2023). Personal Data Protection Act.

[Government of Japan, 2022] Government of Japan (2022). AI Strategy 2022. Outline of Japan's AI policies and initiatives.

[Graham et al., 2020] Graham, L., Gilbert, A., Simons, J., Thomas, A., and Mountfield, H. (2020). Artificial intelligence in hiring: Assessing impacts on equality. Institute for the Future of Work.

[Gunashekar et al., 2024] Gunashekar, S., van Soest, H., Qu, M., Politi, C., Aquilino, M. C., and Smith, G. (2024). Examining the landscape of tools for trustworthy AI in the UK and the US: Current trends, future possibilities, and potential avenues for collaboration. Retrieved from https://www.rand.org/pubs/research_reports/RRA3194-1.html.

[Hacker et al., 2023] Hacker, P., Engel, A., and Mauer, M. (2023). Regulating ChatGPT and other large generative AI models. In Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23), pages 1112–1123. Association for Computing Machinery.

[Harrer, 2023] Harrer, S. (2023). Attention is not all you need: the complicated case of ethically using large language models in healthcare and medicine. EBioMedicine, 90:104512.

[Hasan and Padman, 2006] Hasan, S. and Padman, R. (2006). Analyzing the effect of data quality on the accuracy of clinical decision support systems: a computer simulation approach. In AMIA Annual Symposium Proceedings, pages 324–328.

[Health Canada, 2023] Health Canada (2023). Draft guidance: Pre-market guidance for machine learning-enabled medical devices. Retrieved from https://www.canada.ca/en/health-canada/services/drugs-health-products/medical-devices/application-information/guidance-documents/pre-market-guidance-machine-learning-enabled-medical-devices.html.

[HK Government, 2024] HK Government (2024). TR-008:2024(E) Medical Device Administrative Control System (MDACS) Artificial Intelligence Medical Devices (AI-MD) technical reference. Technical report, Department of Health, The Government of the Hong Kong Special Administrative Region.

[(HSA), 2022] Health Sciences Authority (HSA) (2022). Regulatory guidelines for software medical devices - a life cycle approach. Technical report, Health Sciences Authority (HSA). Guidance document for the regulatory requirements of software medical devices in Singapore.

[Hurcombe and Neo, 2024] Hurcombe, L. and Neo, H. Y. (2024). China AI trailblazers in GenAI standards in Asia. DLA Piper. Retrieved from https://www.dlapiper.com/en/insights/publications/2024/04/china-ai-trailblazers-in-genai-standards-in-asia.

[IEC, 2024] IEC (2024). IEC PT 63450 dashboard.

[IMDA, 2023a] IMDA (2023a). Cataloguing LLM evaluations: Draft for discussion. Last Accessed: 22 May, 2024.

[IMDA, 2023b] IMDA (2023b). Generative AI: Implications for trust and governance. Last Accessed: 22 May, 2024.

[IMDA, 2024] IMDA (2024). Proposed Model AI Governance Framework for Generative AI: Fostering a trusted ecosystem. Last Accessed: 22 May, 2024.

[Indian Council of Medical Research (ICMR), 2023] Indian Council of Medical Research (ICMR) (2023). Ethical guidelines for application of artificial intelligence in biomedical research and healthcare. Prepared by DHR-ICMR Artificial Intelligence Cell. Retrieved from https://main.icmr.nic.in/sites/default/files/upload_documents/Ethical_Guidelines_AI_Healthcare_2023.pdf.

[ISOTC, 2024] ISOTC (2024). ISO/TC 215.

[ITU FG-AI4H, 2022] ITU FG-AI4H (2022). Good practices for health applications of machine learning: Considerations for manufacturers and regulators. Retrieved from https://www.itu.int/dms_pub/itu-t/opb/fg/T-FG-AI4H-2022-2-PDF-E.pdf.

[Jiang et al., 2017] Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, H., Ma, S., Wang, Y., Dong, Q., Shen, H., and Wang, Y. (2017). Artificial intelligence in healthcare: past, present and future. Stroke and Vascular Neurology, 2(4):230–243.

[Jindal et al., 2024] Jindal, J. A., Lungren, M. P., and Shah, N. H. (2024). Ensuring useful adoption of generative artificial intelligence in healthcare. Journal of the American Medical Informatics Association, 31(6):1441–1444.

[Johnson et al., 2015] Johnson, S. G., Byrne, M. D., Christie, B., Delaney, C. W., LaFlamme, A., Park, J. I., Pruinelli, L., Sherman, S. G., Speedie, S., and Westra, B. L. (2015). Modeling flowsheet data for clinical research. In AMIA Jt Summits Transl Sci Proc, pages 77–81.

[Kanbach et al., 2023] Kanbach, D. K., Heiduk, L., Blueher, G., Schreiter, M., and Lahmann, A. (2023). The GenAI is out of the bottle: generative artificial intelligence from a business model innovation perspective. Review of Managerial Science.

[Karimian et al., 2022] Karimian, G., Petelos, E., and Evers, S. M. A. A. (2022). The ethical issues of the application of artificial intelligence in healthcare: A systematic scoping review. AI and Ethics, 2(4):539–551.

[Kolton, 2024] Kolton, E. A. (2024). FDA final rule harmonizes medical device quality system regulation with international standard. Mondaq Business Briefing.

[Krafft et al., 2020] Krafft, P. M., Young, M., Katell, M., Huang, K., and Bugingo, G. (2020). Defining AI in policy versus practice. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (AIES '20), pages 72–78. Association for Computing Machinery.

[Leckenby et al., 2021] Leckenby, E., Dawoud, D., Bouvy, J., and Jónsson, P. (2021). The sandbox approach and its potential for use in health technology assessment: A literature review. Applied Health Economics and Health Policy, 19(6):857–869.

[Lehmann, 2021] Lehmann, L. S. (2021). Ethical challenges of integrating AI into healthcare. In Springer EBooks, pages 1–6.

[Leslie et al., 2021] Leslie, D., Burr, C., Aitken, M., Cowls, J., Katell, M., and Briggs, M. (2021). Artificial intelligence, human rights, democracy, and the rule of law: a primer. arXiv preprint arXiv:2104.04147.

[Lindström-Gommers and Mullin, 2019] Lindström-Gommers, L. and Mullin, T. (2019). International conference on harmonization: Recent reforms as a driver of global regulatory harmonization and innovation in medical products. Clinical Pharmacology & Therapeutics, 105(4):926–931.

[Luckett, 2023] Luckett, J. (2023). Regulating generative AI: A pathway to ethical and responsible implementation. Journal of Computing Sciences in Colleges, 39(3):47–65.

[Magrabi et al., 2019] Magrabi, F., Ammenwerth, E., McNair, J. B., De Keizer, N. F., Hyppönen, H., Nykänen, P., Rigby, M., Scott, P. J., Vehko, T., Wong, Z. S.-Y., and Georgiou, A. (2019). Artificial intelligence in clinical decision support: Challenges for evaluating AI and practical implications. Yearbook of Medical Informatics, 28(01):128–134.

[Meskó and Topol, 2023] Meskó, B. and Topol, E. J. (2023). The imperative for regulatory oversight of large language models (or generative AI) in healthcare. npj Digital Medicine, 6:120.

[METI Japan, 2024] METI Japan (2024). (Draft) AI guidelines for business. Retrieved from https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20240119_4.pdf.

[Ministry of ICT and Innovation, Rwanda, 2020] Ministry of ICT and Innovation, Rwanda (2020). National AI policy. Policy document outlining Rwanda's national AI strategy.

[MMR, 2024] MMR (2024). Artificial intelligence in healthcare market size, growth, opportunities & trends: Global industry analysis and forecast (2024-2030). Retrieved from https://www.maximizemarketresearch.com/market-report/global-artificial-intelligence-ai-healthcare-market/21261/.

[Murphy et al., 2021] Murphy, K., Di Ruggiero, E., Upshur, R., Willison, D. J., Malhotra, N., Cai, J. C., Lui, V., and Gibson, J. (2021). Artificial intelligence for good health: A scoping review of the ethics literature. BMC Medical Ethics, 22(1).

[National Institute of Standards and Technology (NIST), 2023] National Institute of Standards and Technology (NIST) (2023). Artificial Intelligence Risk Management Framework (AI RMF 1.0). Retrieved from https://doi.org/10.6028/nist.ai.100-1.

[Nestor Maslej et al., 2023] Nestor Maslej et al. (2023). The AI Index 2023 annual report. Retrieved from https://hai.stanford.edu/sites/default/files/2023-04/HAI_AI-Index-Report_2023.pdf.

[OECD, 2019] OECD (2019). AI policy observatory portal. Retrieved from https://oecd.ai/en/ai-principles.

[OECD, 2024] OECD (2024). Catalogue of tools & metrics for trustworthy AI. Retrieved from https://oecd.ai/en/catalogue/overview.

[of Medical Research (ICMR), 2023] Indian Council of Medical Research (ICMR) (2023). Ethical guidelines for application of AI in biomedical research and healthcare. Guidelines for the ethical application of AI in biomedical research and healthcare in India.

[Palaniappan et al., 2024] Palaniappan, K., Lin, E. Y. T., and Vogel, S. (2024). Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. In Healthcare, volume 12, page 562. MDPI.

[Peter Stone et al., 2016] Peter Stone et al. (2016). Artificial intelligence and life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015-2016 Study Panel. Retrieved from http://ai100.stanford.edu/2016-report.

[Petrenko and Boloban, 2023] Petrenko, A. and Boloban, O. (2023). Generalized information with examples on the possibility of using a service-oriented approach and artificial intelligence technologies in the industry of e-health. Technology Audit and Production Reserves, 4(2 (72)):10–17.

[Precedence Research, 2024] Precedence Research (2024).

[Reddy et al., 2020] Reddy, S., Allan, S., Coghlan, S., and Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3):491–497.

[Regulatory Affairs Professionals Society (RAPS), 2024] Regulatory Affairs Professionals Society (RAPS) (2024). Euro convergence: Experts concerned about incompatibilities between AI Act and MDR. Retrieved from https://www.raps.org/news-and-articles/news-articles/2024/5/euro-convergence-experts-concerned-about-incompati.

[Rocher et al., 2019] Rocher, L., Hendrickx, J. M., and de Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1).

[Romagnoli et al., 2024] Romagnoli, A., Ferrara, F., Langella, R., et al. (2024). Healthcare systems and artificial intelligence: Focus on challenges and the international regulatory framework. Pharmaceutical Research, 41(3):721–730.

[Salahuddin et al., 2022] Salahuddin, Z., Woodruff, H. C., Chatterjee, A., and Lambin, P. (2022). Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Computers in Biology and Medicine, 140:105111.

[Salathé et al., 2018] Salathé, M., Wiegand, T., and Wenzel, M. (2018). Focus group on artificial intelligence for health. arXiv preprint arXiv:1809.04797.

[Saudi Food and Drug Authority (SFDA), 2022] Saudi Food and Drug Authority (SFDA) (2022). Guidance on artificial intelligence (AI) and machine learning (ML) technologies based medical devices. Retrieved from https://www.sfda.gov.sa/sites/default/files/2023-01/MDS-G010ML.pdf.

[Schiff et al., 2020] Schiff, D., Biddle, J., Borenstein, J., and Laas, K. (2020). What's next for AI ethics, policy, and governance? A global overview. In Proceedings of the
AAAI/ACM Conference on AI, Ethics, and Society.
Generative ai in healthcare market size, growth report
2032. Last Accessed: 22 May, 2024. [Senate of Brazil, 2023] Senate of Brazil (2023). Bill no.
[Radu, 2021] Radu, R. (2021). Steering the governance of 2338/2023. In progress.
artificial intelligence: National strategies in perspective. [Simon and Aliferis, 2024] Simon, G. J. and Aliferis, C.
Policy and Society, 40(2):178–193. (2024). Artificial Intelligence and Machine Learning in
[Rahman et al., 2024] Rahman, M. A., Victoros, E., Ernest, Health Care and Medical Sciences. Springer International
Publishing.
J., Davis, R., Shanjana, Y., and Islam, M. R. (2024). Im-
pact of artificial intelligence (ai) technology in healthcare [Smith, 2021] Smith, H. (2021). Clinical ai: Opacity, ac-
sector: A critical evaluation of both sides of the coin. Clin- countability, responsibility and liability. AI & Society,
ical Pathology (Thousand Oaks, Ventura County, Calif.), 36(4):535–545.
17:2632010X241226887. [Soon and Tan, 2023] Soon, C. and Tan, B. (2023). Regulat-
[Reddy, 2023] Reddy, S. (2023). Navigating the ai revolu- ing artificial intelligence: Maximising benefits and min-
tion: the case for precise regulation in health care. Journal imising harms.
of Medical Internet Research, 25:e49989. [Stahl et al., 2022] Stahl, B. C., Rodrigues, R., Santiago, N.,
[Reddy, 2024] Reddy, S. (2024). Generative ai in healthcare: and Macnish, K. (2022). A european agency for artifi-
an implementation science informed translational path on cial intelligence: Protecting fundamental rights and ethical
application, integration and governance. Implementation values. Computer Law & Security Review, 45:105661–
Science, 19(1):27. 105661.
[StanDict, 2024] StanDict (2024). Ieee p2802 - standard for [Yan, 2024] Yan, W. (2024). Do not go gentle into that
the performance and safety evaluation of artificial intelli- good night: The european union’s and china’s different ap-
gence based medical device: Terminology. proaches to the extraterritorial application of artificial in-
[Thierer, 2023] Thierer, A. D. (2023). Flexible, pro- telligence laws and regulations. Computer Law & Security
innovation governance strategies for artificial intelligence. Review, 53:105965.
R Street Policy Study, 283. Retrieved from https://ssrn. [Yang et al., 2023] Yang, R., Tan, T. F., Lu, W.,
com/abstract=4423897. Thirunavukarasu, A. J., Ting, D. S. W., and Liu, N.
[TLR Health Europe, 2023] TLR Health Europe (2023). (2023). Large language models in health care: develop-
Embracing generative ai in health care. The Lancet Re- ment, applications, and challenges. Health Care Science,
gional Health-Europe, 30:100677. 2:255–263.
[UK Government, 2021] UK Government (2021). National [Zeng, 2022] Zeng, J. (2022). China’s ai approach: A top-
ai strategy. UK Government’s strategy on artificial intelli- down nationally concerted strategy? In Artificial Intelli-
gence. gence with Chinese Characteristics. Palgrave Macmillan,
Singapore.
[UK Government, 2024] UK Government (2024). The gov-
ernment data quality framework.
[University, 2023] University, M. (2023). Artificial intel-
ligence and the future of work: National agenda and
roadmap. National agenda and roadmap for AI and the
future of work.
[Vayena et al., 2018] Vayena, E., Blasimme, A., and Cohen,
I. G. (2018). Ai4people’s ethical framework for a good ai
society: Machine learning in medicine: Addressing ethical
challenges. PLOS Medicine, 15(11):e1002689.
[Viswa et al., 2024] Viswa, C. A., Bleys, J., Leydon, E.,
Shah, B., and Zurkiya, D. (2024). Generative AI in the
pharmaceutical industry: Moving from hype to reality.
[Walter, 2024] Walter, Y. (2024). Managing the race to the
moon: Global policy and governance in artificial intelli-
gence regulation—a contemporary overview and an anal-
ysis of socioeconomic consequences. Discovery of Artifi-
cial Intelligence, 4(14).
[Wang and Preininger, 2019] Wang, F. and Preininger, A.
(2019). Ai in health: State of the art, challenges, and future
directions. Yearbook of Medical Informatics, 28(01):16–
26.
[WHO, 2023a] WHO (2023a). Regulatory considerations on
artificial intelligence for health. Geneva: World Health
Organization; Licence: CC BY-NC-SA 3.0 IGO.
[WHO, 2023b] WHO (2023b). WHO outlines considera-
tions for regulation of artificial intelligence for health.
[WHO et al., 2024] WHO et al. (2024). Ethics and gov-
ernance of artificial intelligence for health: guidance on
large multi-modal models. WHO News.
[Wolters Kluwer, 2024] Wolters Kluwer (2024).
Wolters kluwer survey: Over two-thirds of u.s.
physicians have changed their mind, now view-
ing genai as beneficial in healthcare. Retrieved
from https://www.wolterskluwer.com/en/news/
gen-ai-clincian-survey-press-release#downloads.
[World Health Organization, 2021] World Health Organiza-
tion (2021). WHO Global Strategy on Digital Health
(2020–2025).
[Wu, 2023] Wu, Y. (2023). China’s Interim Measures to Reg-
ulate Generative AI Services: Key Points.
A Supplemental Tables

Table 1: Documentation and Transparency
(Columns: Law / Regulation / Act / Policy / Guidance; Status; Quote)

- Government response to 'Safe and Responsible AI in Australia' discussion paper (Australia). Status: Consideration on AI Safety Standard. Quote: "Transparency – transparency regarding model design and data underpinning AI applications; labelling of AI systems in use and/or watermarking of AI generated content."
- Ontario's Trustworthy Artificial Intelligence (AI) Framework (Canada). Status: Early stages; requesting feedback. Quote: "No AI in secret: This means that we will provide a clear understanding of how and when AI is used."
- Plan for Digital Regulation (UK). Status: In October 2023, an Outcomes Monitoring Framework was published to track progress against the Plan's objectives using key indicators. Quote: "Keeping the UK safe and secure online: Objectives: Improve users' ability to keep themselves safe online through greater platform transparency and non-legislative support measures."
- General Data Protection Regulation (GDPR) (EU). Status: Published and in force. Quote: Recital 58, The Principle of Transparency: "The principle of transparency requires that any information addressed to the public or to the data subject be concise, easily accessible and easy to understand, and that clear and plain language and, additionally, where appropriate, visualisation be used."
- EU AI Act (EU). Status: In March 2024, the European Parliament voted 71-8 to formally adopt the agreed text of the AI Act; official publication was expected in May/June 2024. Quote: "The AI Act introduces transparency obligations for all general-purpose AI models to enable a better understanding of these models and additional risk management obligations for very capable and impactful models. These additional obligations include self-assessment and mitigation of systemic risks, reporting of serious incidents, conducting test and model evaluations, as well as cybersecurity requirements."
- AI Strategy 2022 (Japan). Status: AI Strategy 2022 outlined Japan's AI policies as of last year; the government's approach appears to be evolving towards integrating AI initiatives under its broader innovation strategy framework from 2023 onwards, though there are voices advocating for a fresh dedicated national AI strategy as well. Quote: Part II(3): Overcoming Vulnerabilities Associated with AI and Digitalization – Establishing Responsible AI and Strengthening Cybersecurity as Cybernetic Resilience: "It is extremely important that the social infrastructure formed by AI and digitization is fair, transparent, operated in a responsible manner, and secure." Part III(3): Initiatives to promote implementation in society of AI: (1) Break the black box nature of AI and resolve concerns: "In addition, it is necessary to improve the reliability of AI through initiatives related to Explainable AI (XAI), which breaks the black box nature of AI by enhancing the transparency and accountability of AI processing, and through technological development in the area of integration of cyber security and AI."
- Act on Improving Transparency and Fairness of Digital Platforms (TFDPA) (Japan). Status: Published and enforced. Quote: "The Act requires the specified digital platform providers to disclose terms and conditions and other information, develop procedures and systems to ensure their fairness in a voluntary manner, and to submit a report every fiscal year on the overview of measures that they have conducted, to which self-assessment results are attached."
- National Strategy for Digital Skills (Italy). Status: Adopted and implemented. Quote: "The Strategy Italy 2025 sets out a clear horizon for 'inclusive and sustainable development' as it defines a course of action that moves towards the challenge of an ethical, inclusive, transparent, and sustainable innovation for social well-being."
- Bill No. 2338/2023 (Brazil). Status: In progress. Quote (translated): Article 18, Part VII: "It will be up to the competent authority to update the list of excessive risk or high risk artificial intelligence systems, identifying new hypotheses, based on at least one of the following criteria: ... low degree of transparency, explainability and auditability of the artificial intelligence system, which makes its control or supervision difficult." Chapter IV, Section I, Article 19: "Artificial intelligence agents shall establish adequate governance structures and internal processes to ensure the security of systems and compliance with the rights of affected people, under the terms set out in Chapter II of this Law and the relevant legislation, which will include, at least: I – transparency measures regarding the use of artificial intelligence systems in interaction with natural people, which includes the use of adequate, sufficiently clear and informative human-machine interfaces; II – transparency regarding the governance measures adopted in the development and use of the artificial intelligence system by the organization." Article 24: "The impact assessment methodology will contain at least the following steps... transparency measures to the public, especially to potential users of the system, regarding residual risks, mainly when it involves a high degree of harm or danger to health or user safety..." Article 3: "The development, implementation and use of artificial intelligence systems will observe good faith and the following principles: ... transparency, explainability, intelligibility and auditability."
- AI and Data Act (AIDA) (Canada). Status: Introduced in June 2022; amendments proposed in November 2023. Quote: "Transparency means providing the public with appropriate information about how high-impact AI systems are being used. The information provided should be sufficient to allow the public to understand the capabilities, limitations, and potential impacts of the systems."
- Health Canada: Premarket guidance for ML-enabled MD (Canada). Status: Draft. Quote: "From our perspective, MLMD lifecycle includes... transparency."
- Egyptian Charter on Responsible AI (Egypt). Status: Published. Quote: "End user right to know when interacting with AI system, ability to challenge AI outcomes, boost awareness and develop pedagogy in AI."
- National AI Policy (Rwanda). Status: Approved. Quote: Recommendation No. 8: "By strengthening the capacity of regulatory authorities to understand and regulate AI aligned with emerging global standards and best practices, we will build transparency and trust with the public."
- Medical Device Administrative Control System (MDACS) AI Medical Device TR-008 (Hong Kong). Status: Published. Quote: Section 4.4: "For AI-MD with CLC, complete information on the learning process including the process controls, verification, on-going model monitoring measures shall be clearly presented for review in the application for listing AI-MD."
- Guidance on AI/ML based Medical Devices (Saudi Arabia). Status: Published. Quote: Quality Management Systems: "The QMS shall assist the organization to produce a systematic documentation of the AI/ML and its supporting design and development, including a robust and documented configuration and change management process, and identifying its constituent parts, to provide a history of changes made to it, and to enable recovery/recreation of past versions of the software, i.e., traceability of the AI/ML."
- Ethical Guidelines for application of AI in biomedical research and healthcare (India). Status: Published. Quote: Section 1.3, Trustworthiness: "Explainable, i.e., the results and interpretations provided by AI-based algorithms should be explainable based on scientific plausibility... The end-user must be provided with adequate information in a language they can understand to ensure that they are not being manipulated by the AI technologies."
Table 2: Risk Management
(Columns: Law / Regulation / Act; Status; Quote)

- Government response to 'Safe and Responsible AI in Australia' discussion paper (Australia). Status: Consideration on AI Safety Standard. Quote: "We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI." "The Government's response is targeted towards the use of AI in high-risk settings, where harms could be difficult to reverse, while ensuring that the vast majority of low risk AI use continues to flourish largely unimpeded."
- National Policy Roadmap for AI regulation (Australia). Status: Published. Quote: Recommendations, Point 2: "To ensure AI in healthcare is safe, effective and therefore does not harm patients, it needs to be developed and deployed within a robust risk-based safety framework." Point 1: "To better coordinate and harmonise the responsibilities and activities of those entities responsible for oversight of AI safety, effectiveness, and ethical and security risks, establish a National AI in Healthcare Council."
- Bill No. 2338/2023 (Brazil). Status: In progress. Quote (translated): Chapter III, Risk Categorization, Article 13: "Prior to its placing on the market or use in service, every artificial intelligence system will undergo a preliminary evaluation carried out by the supplier to classify its level of risk... There will be registration and documentation of the preliminary assessment carried out by the supplier for liability and accountability purposes in the event that the artificial intelligence system is not classified as high risk... if the result of the reclassification identifies the artificial intelligence as high risk, conducting an algorithmic impact assessment and adopting the other governance measures provided for in Chapter IV will be mandatory." The Bill also calls out excessive risk, high risk, Governance Measures for High-Risk Artificial Intelligence Systems (Section II), Algorithmic Impact Assessment (Section III), and Codes of Good Practice and Governance (Chapter VI).
- EU AI Act (EU). Status: In March 2024, the European Parliament voted 71-8 to formally adopt the agreed text of the AI Act; official publication was expected in May/June 2024. Quote: Article 5 – Prohibited AI practices; Article 6 – Risk Management System for High-Risk AI Systems; Article 7 – Additional Requirements for High-Risk AI Systems; Article 9 – Biometric Categorization Systems; Article 52 – Classification Rules for High-Risk AI Systems; Article 53 – Managing Risks Related to General Purpose AI Systems; Article 61 – AI Systems Presenting Limited Risk; Annex III; Annex VII.
- National AI Strategy (UK). Status: Current guiding policy framework. Quote: "The government is also exploring how privacy enhancing technologies can remove barriers to data sharing by more effectively managing the risks associated with sharing commercially sensitive and personal data."
- AI Strategy 2022 (Japan). Status: AI Strategy 2022 outlined Japan's AI policies as of last year; the government's approach appears to be evolving towards integrating AI initiatives under its broader innovation strategy framework from 2023 onwards, though there are voices advocating for a fresh dedicated national AI strategy as well. Quote: Part II (2) (3): "In order to cope with increasingly complex and sophisticated attacks and the risk of vulnerability that increases as systems become more complex, active consideration should be given to the use of AI, such as information gathering, analysis, support functions, and AI for automation of defense in order to help cyber security analysts make decisions." Part III (3) (1): "...efforts to realize 'Responsible AI' are also expected through initiatives related to the ELSI of AI, such as designing AI with ethical considerations in the first place and conducting audits in the AI utilization cycle."
- National Strategic Programme for Artificial Intelligence (Italy). Status: Adopted and published. Quote: Guiding Principles: "On the other hand, the Government is committed to governing AI and mitigating its potential risks, especially to safeguard human rights and ensure an ethical deployment of AI."
- AI and Data Act (AIDA) (Canada). Status: Introduced in June 2022; amendments proposed in November 2023. Quote: "The Government has developed a framework intended to ensure the proactive identification and mitigation of risks in order to prevent harms and discriminatory outcomes." High-impact AI systems, considerations and systems of interest: "The risk-based approach in AIDA, including key definitions and concepts, was designed to reflect and align with evolving international norms in the AI space – including the EU AI Act, the Organization of Economic Co-operation and Development (OECD) AI Principles, and the US National Institute of Standards and Technology (NIST) Risk Management Framework (RMF) – while integrating seamlessly with existing Canadian legal frameworks." Regulatory Requirements: "AIDA would require that appropriate measures be put in place to identify, assess, and mitigate risks of harm or biased output prior to a high-impact system being made available for use."
- Health Canada: Premarket guidance for ML-enabled MD (Canada). Status: Draft. Quote: "From our perspective, MLMD lifecycle includes risk management."
- Egyptian Charter on Responsible AI (Egypt). Status: Published. Quote: "AI risk assessment, reduce harm."
- National AI Policy (Rwanda). Status: Published. Quote: "Rwanda's Guidelines on the Ethical Development and Implementation of Artificial Intelligence, developed by RURA, address the range of risks in the AI system lifecycle and considerations for responsible and trustworthy adoption of AI in Rwanda."
- Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (Singapore). Status: Published. Quote: Section 9.2: "If the AI-MD is deployed in a decentralised environment, there should be robust processes in place to address the risks involved in such a decentralised model. Other process controls for consideration include maintaining traceability, performance monitoring and change management."
- Guidance on AI/ML based Medical Devices (Saudi Arabia). Status: Published. Quote: Risk management: "Data scientists should be included in the cross-functional team that performs risk management tasks... There should be a risk management plan that includes..."
- Ethical Guidelines for application of AI in biomedical research and healthcare (India). Status: Published. Quote: Section 1.2, Safety and Risk Minimization: "Some of the risk minimization and safety points are mentioned below... A robust set of control mechanisms is necessary to prevent unintended or deliberate misuse..."
Table 3: Data Quality
(Columns: Law / Regulation / Act; Status; Quote)

- National Policy Roadmap for AI regulation (Australia). Status: Published. Quote: Recommendations, Point 10: "Changes may include disclosure to patients that their deidentified patient data is being used to train AI and that clinical recommendations are being based on information provided by AI." Point 13: "Develop mechanisms to provide industry with ethical and consent-based access to clinical data to support AI development and leverage existing national biomedical data repositories... To maximise national benefit these mechanisms should be based on consistent use of identifiers across the healthcare system and national interoperability standards (e.g. FHIR, SNOMED CT) and be aligned with minimum national datasets and software vendor conformance profiles."
- Data Quality Framework (UK). Status: Guidance, published. Quote: The framework provides: "Data quality principles to support organisations to create a data quality culture - A guide to the data lifecycle to help organisations to identify and mitigate potential data quality issues at all stages - Data quality dimensions against which regular assessments of data quality can be made - Data quality action plans, used to identify practical steps to assess data quality and make targeted improvements - Root cause analysis to ensure data quality work addresses issues at source - Metadata guidance to support better use of metadata to communicate and interpret quality - Communicating quality guidance, including suggested approaches for clearly communicating quality to users - An introduction to data maturity models, for those who want to take a holistic approach to assessing and improving data quality."
- European Health Data Space (EHDS) (EU). Status: Formally approved. Quote: Secondary use of health data and the EHDS: "This document identifies several policy options for each barrier, ranging from proposals on improving the clarity of EU data protection law to proposals for improving data quality and interoperability." Key recommendations of TEHDAS [5]: "The TEHDAS data quality framework contains the main elements in data quality. These include the steps in the process of preparing data for research and innovation... the European Medicines Agency has used TEHDAS' data quality work in its efforts to leverage routine data in the real-world evaluation of drugs and medical devices."
- AI Strategy 2022 (Japan). Status: AI Strategy 2022 outlined Japan's AI policies as of last year; the government's approach appears to be evolving towards integrating AI initiatives under its broader innovation strategy framework from 2023 onwards, though there are voices advocating for a fresh dedicated national AI strategy as well. Quote: Part III (3) (2): "In Japan, there is considerable accumulation of high-quality data in each field. Therefore, efforts should be made to enhance data that supports AI utilization by linking and converting these data in a form suitable for AI. With regard to the excellent data base, it is expected that a 'data economic zone' centering on Japan will be constructed by actively engaging in cooperation with other countries." Part IV (2) (3): "Implementation of initiatives that contribute to assurance and confirmation of the quality of collected big data."
- Health Canada: Premarket guidance for ML-enabled MD (Canada). Status: Draft. Quote: "From our perspective, MLMD lifecycle includes... describing the selection and management of data for an MLMD."
- EU AI Act (EU). Status: In March 2024, the European Parliament voted 71-8 to formally adopt the agreed text of the AI Act; official publication was expected in May/June 2024. Quote: Article 7, Data and Data Governance: for high-risk AI systems, this clause mandates using high-quality training, validation and testing data sets that are relevant and representative of the specific geographical, behavioral or functional setting within which the AI system is intended to be used.
- National AI Policy (Rwanda). Status: Published. Quote: Implementation Plan Summary, Priority Area 3, Robust data strategy: "Output: Increased availability and access to quality data for training AI models. Indicator: Size (bytes) of open AI-ready data available to the research and innovation community; number of times the open datasets are accessed or downloaded over time."
- Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (Singapore). Status: Published. Quote: Section 9.2: "...For example, there should be appropriate quality checks to ensure that the quality of learning datasets is equivalent to the quality of the original training datasets. There should be validation processes incorporated within the system..."
- Medical Device Administrative Control System (MDACS) AI Medical Device TR-008 (Hong Kong). Status: Published. Quote: Section 4.1, Dataset: "The source and size of training, validation and test dataset shall be defined. Information on labelling of datasets, curation, annotation or other steps shall be clearly presented. Description on dataset cleaning and missing data imputation shall also be defined."
- Ethical Guidelines for application of AI in biomedical research and healthcare (India). Status: Published. Quote: Section 1.6, Optimization of Data Quality: "The manufacturer has the responsibility to eliminate the bias. Demonstration of a bias-free AI technology with the optimum function before a competent authority is mandatory for resuming operations... Training data must not have any sampling bias. Such sampling bias may interfere with data quality and accuracy. Researchers must ensure data quality."
Table 4: Intended Use, Analytical and Clinical Validation
(Columns: Law / Regulation / Act; Status; Quote)

- AI and Data Act (AIDA) (Canada). Status: Proposed amendments; unclear when it will take effect. Quote: "Validity means a high-impact AI system performs consistently with intended objectives. Robustness means a high-impact AI system is stable and resilient in a variety of circumstances."
- Health Canada: Premarket guidance for ML-enabled MD (Canada). Status: Draft. Quote: "From our perspective, MLMD lifecycle includes... design, testing and evaluation, clinical validation, post-market performance monitoring." "The intended use or medical purpose should be made clear in the application... including device function information."
- EU AI Act (EU). Status: In March 2024, the European Parliament voted 71-8 to formally adopt the agreed text of the AI Act; official publication was expected in May/June 2024. Quote: Article 17, Quality management system: "examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system." Annex IV, Technical documentation referred to in Article 11(1): "the validation and testing procedures used." Article 3 (53): "'real-world testing plan' means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions."
- Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (Singapore). Status: Published. Quote: Section 3.5 (Clinical evaluation): "The clinical evaluation process establishes that there is a valid clinical association between the software output and the specified clinical condition according to the product owner's intended use." "Test protocol and report for verification and validation of the AI-MD, including the acceptance."
- Medical Device Administrative Control System (MDACS) AI Medical Device TR-008 (Hong Kong). Status: Published. Quote: Section 4.1, Performance and Clinical Validation: "Validation and verification test report(s) shall be provided to substantiate such performance claim (e.g. diagnostic sensitivity, diagnostic specificity, accuracy)."
- Guidance on AI/ML based Medical Devices (Saudi Arabia). Status: Published. Quote: Clinical evaluation: "A manufacturer of AI/ML-based medical devices is expected to provide clinical evidence of the device's safety, effectiveness and performance before it can be placed on the market." Analytical validation: "Analytical validation should be done using large independent reference dataset reflecting the intended purpose and the diversity of the intended population and setting." Intended use: "If the Artificial Intelligence (AI) and Machine Learning (ML) devices are intended by the Product developer to be used for investigation, detection, diagnosis, monitoring, treatment, or management of any medical condition, disease, anatomy or physiological process, it will be classified as a medical device subject to SFDA's regulatory controls."
- Ethical Guidelines for application of AI in biomedical research and healthcare (India). Status: Published. Quote: Section 1.6, Optimization of data quality: "These inherent problems related to data can be minimized by rigorous clinical validation before any AI-based technology is used in healthcare." Section 1.10: "AI technology in healthcare must undergo rigorous clinical and field validation before application on patients/participants." Section 2 of the document, "Guiding Principles for stakeholders involved in development, validation and deployment", describes in detail how AI-based solutions for healthcare must be validated; Section 2.2 describes guiding principles for analytical and clinical validation.

Footnote 5: The TEHDAS1 project (ended in July 2023) developed joint European principles for the secondary use of health data. The work involved 25 countries. The TEHDAS2 joint action started in May 2024 and will build on the work of TEHDAS1.
Table 5: Privacy and Data Protection

Law / Regulation / Act Status Quote


Plan for Digital Regulation (UK) In October 2023, an Outcomes Mon- “Objectives: Citizens are empowered
itoring Framework was published to to be safe online, and trust they are pro-
track progress against the Plan’s objec- tected from online harms beyond their
tives using key indicators. control; Organisations have the capa-
bilities and resilience to preserve their
digital security, and security is factored
into new products and services from the
outset; The security of UK networks
and critical infrastructure is protected.”
Egyptian Charter on Responsible AI Published “Final human determination always
takes place especially for sensitive or
mission-critical AI applications, data
protection and AI risk assessment”
National AI Policy (Rwanda) | Published | Implementation Plan Summary, Priority Area 2, N24: “Publish guidance targeted towards industry and users on how existing privacy legislation fits with cloud computing.” N30: “Enforce Data Protection and Privacy Law.”
Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (Singapore) | Published | Section 8 (cybersecurity) [multiple references]
Guidance on AI/ML based Medical Devices (Saudi Arabia) | Published | Risk management: “There should be a risk management plan that includes cybersecurity risks.”
Ethical Guidelines for application of AI in biomedical research and healthcare (India) | Published | Section 1.4, Data Privacy: “Individual patients’ data should preferably be anonymized unless keeping it in an identifiable format is essential for clinical or research purposes. All algorithms handling data related to patients must ensure appropriate anonymization before any form of data sharing.”
European Health Data Space (EHDS) | Published | “The EHDS will: empower individuals to take control of their health data and facilitate the exchange of data for the delivery of healthcare across the EU.” “In the future, decisions on using health data will be taken by a specialised authority in each country, a health data access body. Access to data would only be allowed for specific purposes. The TEHDAS⁶ project developed a data quality framework which aims to ensure that health data collected across Europe and reused for policy-making, regulation and research is reliable enough and fit for purpose.”
Bill introducing Consumer Privacy Protection Act (Canada) | Consideration stage in the House of Commons committee | “Enhancing Canadians’ control and consent... New rules will require transparency on the use of automated systems—such as artificial intelligence—that make decisions and predictions about Canadians... Clearer rules for the handling of de-identified information will facilitate its use for the research and development of innovative goods and services.”
General Data Protection Regulation (GDPR) (EU) | Published and in force | Recital 1 (Data Protection as a Fundamental Right): “The protection of natural persons in relation to the processing of personal data is a fundamental right.” Recital 53 (Processing of Sensitive Data in Health and Social Sector): “Special categories of personal data which merit higher protection should be processed for health-related purposes only where necessary... this Regulation should provide for harmonised conditions for the processing of special categories of personal data concerning health, in respect of specific needs, in particular where the processing of such data is carried out for certain health-related purposes by persons subject to a legal obligation of professional secrecy.” Recital 54 (Processing of Sensitive Data in Public Health Sector): “Such processing of data concerning health for reasons of public interest should not result in personal data being processed for other purposes by third parties... The processing of special categories of personal data may be necessary for reasons of public interest in the areas of public health without consent of the data subject.”
National Data Strategy (UK) | Published | “A robust regime is already in place: there are categories of data sharing that are not permitted subject to a consent framework and/or can only be done in certain ways to manage those risks, which the government continues to keep under review. Levers to manage this include the Information Commissioner’s Office (ICO)’s data sharing code of practice, the Centre for the Protection of National Infrastructure’s Security-Minded approach to Open and Shared Data, the Official Secrets Act, the Information Management Framework which is currently under development, and other relevant legislation and guidance. The Central Digital and Data Office’s Data Ethics Framework, which is designed to guide public sector use of data, may also inform how organisations in the private and third sector use data.”
National Cyber Strategy (UK) | Published | Foreword: “We see this in our response to international health emergencies and in our promotion of Net Zero targets...” Annex B: “The NIS Regulations established a new regulatory regime within the UK that requires designated operators of essential services (OESs) and relevant digital service providers (RDSPs) to put in place technical and organisational measures to secure their network and information systems... It applies to sectors... healthcare.” Pillar 3, Objective 3: “Our activity in and in relation to cyberspace has enhanced global stability... This will include but not be limited to tackling internet shutdowns, bias in Artificial Intelligence algorithms and increasing online safety.”
National Policy Roadmap for AI regulation (Australia) | Published | Priority Area 1: “Patients must receive safe, effective, and ethical care from AI-enabled healthcare services and be assured sensitive healthcare data are protected from cybersecurity threats, privacy breaches or unauthorised use... Uploading sensitive patient data into a non-medical AI like ChatGPT hosted on United States servers is also problematic from a privacy and consent perspective.”
Ontario’s Trustworthy Artificial Intelligence (AI) Framework | Early stages: requesting feedback | Principles for Ethical Use of AI [Beta]: “Data enhanced technologies should be designed and operated in a way throughout their life cycle that respects the rule of law, human rights, civil liberties, and democratic values. These include dignity, autonomy, privacy, data protection, non-discrimination, equality, and fairness.”
Act to Modernize Legislative Provisions respecting the Protection of Personal Information (Quebec) | Passed | Division II (8.1): “...any person who collects personal information from the person concerned using technology that includes functions allowing the person concerned to be identified, located or profiled must first inform the person... ‘Profiling’ means the collection and use of personal information to assess certain characteristics of a natural person, in particular for the purpose of analyzing that person’s... health.” Division III, Section 1 (18.2): “no information relating to a person’s health may be communicated without the consent of the person concerned unless 100 years have elapsed since the date of the document.”
AI Strategy 2022 (Japan) | AI Strategy 2022 outlined Japan’s AI policies as of 2022; the government’s approach appears to be evolving towards integrating AI initiatives under its broader innovation strategy framework from 2023 onwards, though some voices advocate for a fresh dedicated national AI strategy. | Part II (2)(3): “The realization of Responsible AI is a requirement that must be secured in the promotion of digitization. To this end, it will be important to promote further R&D and implementation in society of a series of technologies such as Explainable AI (XAI) and Federated Learning, which can be learned while protecting privacy and confidential information, as well as to build platforms and to exercise leadership in their operation.”
Bill No. 2338/2023 (Brazil) | In progress | Article 2, Part VIII: “The development, implementation and use of artificial intelligence systems in Brazil are based on: privacy, data protection and informational self-determination.” Article 5, Part VI: “Persons affected by artificial intelligence systems have the following rights, to be exercised in the manner and under the conditions described in this Chapter... privacy, data protection and informational self-determination;” Article 19, Part IV: “Artificial intelligence agents shall establish governance structures and internal processes capable of ensuring the security of the systems and compliance with the rights of affected persons, under the terms provided for in Chapter II of this Law and the relevant legislation, which shall include, at least... legitimacy of data processing in accordance with data protection legislation, including through the adoption of privacy measures by design and by default and the adoption of techniques that minimize the use of personal data.”
⁶ The TEHDAS1 project (ended in July 2023) developed joint European principles for the secondary use of health data; the work involved 25 countries. The TEHDAS2 joint action started in May 2024 and will build on the work of TEHDAS1.