Global AI Governance in Healthcare: A Cross-Jurisdictional Regulatory Analysis
approach on AI in healthcare [Wang and Preininger, 2019; Chae, 2020], there is limited discussion on the status, direction, or existing gaps in AI healthcare regulations for other key countries or regions beyond some brief mentions [Abdullahi Tsanni, 2024]. Murphy et al. [Murphy et al., 2021] distinctly highlight the lack of research on AI ethics in Low- or Middle-Income Countries (LMICs) and public health settings. They emphasize the urgent need for further investigation into the ethical implications of AI in these contexts to ensure its ethical development and implementation on a global scale. The scope of our work has been selected keeping in mind this observation, with the larger goal of increasing awareness and representation in conversations relating to the regulation of AI in healthcare.

There has been a myriad of works on the application of AI in healthcare [Romagnoli et al., 2024; Goldberg et al., 2024; Jiang et al., 2017; Ghosh et al., 2024]. Existing literature discusses the requirement of ethical principles in AI governance and provides a high-level discussion of these principles in the context of AI in healthcare [Karimian et al., 2022; Giovanola and Tiribelli, 2022; Lehmann, 2021]. This paper builds on existing literature to analyze laws, regulations, policies, and guidance documents, within our scope, that demonstrate alignment with the WHO's key principles of ethical AI regulation.

While applications of generative AI (GenAI) and its governance have been discussed in existing literature [Meskó and Topol, 2023; Reddy, 2024; Jindal et al., 2024], our work extends this discussion to country-specific GenAI policies (China and Singapore). This paper also touches upon current legislation on GenAI, given the explosion of large language models (LLMs) like ChatGPT and the growing promise of GenAI to transform clinical workflows, research, and medical affairs [Viswa et al., 2024].

3 Material and Methods

In this work, we have conducted a legal analysis of 25 publicly available laws, guidance documents, and regulations issued in 14 legal jurisdictions (EU, UK, Australia, Canada, Japan, Italy, Brazil, Egypt, Rwanda, Saudi Arabia, Singapore, India, China, and Hong Kong). The choice of nations under the scope of this paper has been made to capture a truly global picture of regulation, as elaborated in Section 2.¹

We have aimed to incorporate a comprehensive range of regulations related to AI in healthcare. However, currently, the global regulatory landscape predominantly addresses the use of AI in healthcare under the regulatory frameworks established for medical devices, specifically Software as a Medical Device (SaMD) [Palaniappan et al., 2024]. Also, most current AI regulations prioritize healthcare but do not provide healthcare-specific regulations [Reddy, 2023]. Therefore, we have analyzed both sector-agnostic generic AI regulations and healthcare-specific AI regulations, mostly in the medical device space. Our definition of sector-agnostic generic AI regulation refers to regulations that govern the use of AI across various sectors and industries, without focusing on the specific applications or risks associated with AI in healthcare. These regulations provide a broad framework for AI governance, addressing general principles and requirements that apply to AI systems regardless of their specific use case. The review also includes national policies in draft or implementation stages, developed by governments, agencies, and standards bodies.

¹ See comment referencing [Murphy et al., 2021] and [Abdullahi Tsanni, 2024] in Section 2 of this paper.
This review considers a mix of four comparative parameters: sector-agnostic generic AI regulations, healthcare-specific AI regulations, non-binding instruments, and binding legal instruments. The regulatory frameworks and guidelines for AI in healthcare across these 14 jurisdictions were identified and downloaded from their respective government healthcare websites for analysis in this review. The search focused on key terms such as regulatory frameworks, legislations, laws, acts, strategies, policies, and guidelines.

We examine quotes and provide references from each of these legal instruments to demonstrate how they align with the WHO's recommended principles for the ethical use of AI in healthcare. The principles include documentation and transparency, risk management, intended use, clinical and analytical validation, data quality, and privacy. This alignment allows us to identify how nations across the world are incorporating WHO guidelines through their strategy, policy, and laws. We identify the legal clauses of jurisprudence tied to the WHO's key principles of ethical AI regulation and reflect on their application in regulations. We also cross-reference publicly available work from international collaboratives and technical focus groups on healthcare-AI regulations to show how their work has influenced, and will continue to influence, national policies on AI in healthcare.
4 Global Regulatory Landscape of AI

4.1 Definitions

There is a lack of agreement on what is defined by AI [Krafft et al., 2020]. While most nations define specific aspects of AI, such as AI systems [Dwivedi et al., 2021], there is an absence of a clear, widely recognized definition of AI. A notable example is Japan's acknowledgment of AI as an 'abstract' concept for which it is 'difficult to strictly define the scope of artificial intelligence in a broad sense' [METI Japan, 2024]. This acknowledgment is apt, given that different kinds of AI have become specialized to particular use cases, an example being GenAI [Kanbach et al., 2023].

This ambiguity in the definition of AI has likely contributed to the field's rapid growth and advancement [Peter Stone et al., 2016]. Figure 2 represents how AI is defined in different nations.

Figure 2: Table representing definitions of AI across nations

4.2 Common themes in global regulations

The common themes in global AI regulations have been outlined by the OECD [OECD, 2019]. Existing literature [Reddy, 2023] discusses how general regulations on AI, while providing a broad framework, may not adequately address the specific challenges of AI applications in healthcare. In response to countries' growing need to responsibly manage the rapid rise of AI health technologies, the WHO has developed frameworks for AI applications in healthcare [WHO, 2023b], as described in Figure 3.

Figure 3: Key regulatory considerations as outlined by the WHO for ethical use of AI in healthcare

These principles can be applied to the use of AI in healthcare settings. To illustrate their relevance, let us consider the clinical setting. AI models trained on unrepresentative data can perpetuate and worsen existing health disparities due to societal discrimination or small sample sizes [Reddy et al., 2020]. In clinical settings, AI systems must prioritize patient privacy, protect against harm, and ensure patients have control over their data usage [Vayena et al., 2018]. Despite the promise of deep learning models in medical imaging and risk prediction, their lack of interpretability and explainability poses significant challenges in healthcare, where transparency is crucial for clinical decision-making [Char et al., 2018]. When selecting from multiple algorithms, it is crucial to evaluate risks related to data quality and the suitability of the foundational data to new contexts, such as variations in population and disease patterns [Magrabi et al., 2019]. Therefore, evaluation guidelines for AI systems should include assessing and collecting evidence on data quality to prevent unintended consequences and harmful outcomes [Magrabi et al., 2019].

The following sections elaborate on each of these principles and analyze how different countries are positioning themselves with respect to them.

Documentation and Transparency

Transparency ensures that relevant stakeholders receive appropriate information about AI systems [Díaz-Rodríguez et al., 2023]. This can be achieved through different levels of transparency, including simulatability (human understanding of the model), decomposability (explaining model behavior and components), and algorithmic transparency (understanding the model's process and output) [Barredo Arrieta et al., 2020; Díaz-Rodríguez et al., 2023]. The ability of AI to learn independently from data poses a challenge when it comes to explaining the decision-making rationale of some AI models [Königstorfer and Thalmann, 2022], posing problems for their application in clinical settings [Smith, 2021]. Therefore, it is necessary to establish instruments and procedures for confirming that AI applications function as intended and adhere to all applicable laws and regulations [Königstorfer and Thalmann, 2022]. Appendix A (Table 1) explains in detail how laws within the scope of this paper address the WHO's principle of documentation and transparency.

Per our analysis, the EU AI Act² is one of the strongest acts, declaring a requirement of technical documentation for high-risk AI systems to enable auditing, monitoring, and reproducibility of AI outputs and processes.

² Chapter III, Article 11, EU AI Act, 2024

A number of regulations in other countries speak to the same principle (Table 1). Most laws in AI governance, in healthcare and beyond, mention transparency and explainability as requirements. However, the definition of transparency varies, from 'communication of appropriate information about an AI system to relevant people' in the UK [Department for Science, Innovation and Technology, 2023] to 'transparency of governance measures and systems used' in Brazil [Senate of Brazil, 2023]. Transparency is defined in a more structured manner in the context of the healthcare sector by Canada, which defines it as "the degree to which appropriate and clear information about a device (that could impact risks and patient outcomes) is communicated to stakeholders" [Health Canada, 2023].
Risk Management

The National Institute of Standards and Technology (NIST) uses the definition of risk management as mentioned in ISO 31000:2018 for AI systems: "Risk management refers to coordinated activities to direct and control an organization with regard to risk" [National Institute of Standards and Technology (NIST), 2023]. The International Telecommunication Union (ITU) Focus Group on Artificial Intelligence for Health (FG-AI4H)³ elaborates on this thought through its recommendation of "a risk management approach that addresses risks associated with AI systems, such as cybersecurity threats and vulnerabilities, underfitting, algorithmic bias etc." across the total product lifecycle of an AI system [Salathé et al., 2018]. Appendix A (Table 2) explains in detail how laws within the scope of this paper address the WHO's principle of risk management.

³ The Focus Group on Artificial Intelligence for Health (FG-AI4H) is a partnership of the ITU and the World Health Organization (WHO) to establish a standardized assessment framework for the evaluation of AI-based methods for health, diagnosis, triage, or treatment decisions.

As per our analysis, risk management is being defined across a spectrum by nations, with prescriptive guidance provided by Brazil on risk classification and risk impact assessment [Senate of Brazil, 2023] and Japan recommending "conducting audits in the AI utilization cycle" [Government of Japan, 2022]. Risks linked to cybersecurity and privacy are highlighted by the UK [UK Government, 2021], while pre- and post-market surveillance is highlighted in Canada's approach towards medical devices [Health Canada, 2023]. Rwanda [Ministry of ICT and Innovation, Rwanda, 2020] and Egypt [OECD, 2023] acknowledge AI risk assessment as a tool for responsible AI, while Singapore [Health Sciences Authority (HSA), 2022] and India [Indian Council of Medical Research (ICMR), 2023] have published technical guidance on process controls and change management. Saudi Arabia [Saudi Food and Drug Authority (SFDA), 2023] emphasizes the involvement of a cross-functional team in performing risk management.
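To make risk classification concrete, the following is a minimal sketch of the kind of severity-probability scoring that ISO 14971-style risk management implies. The hazards, scales, and thresholds are our own illustrative assumptions and are not drawn from any regulation discussed in this paper.

```python
# Illustrative sketch only: a severity-probability risk classification of the
# kind ISO 14971-style risk management implies. Hazards, scales, and
# thresholds are hypothetical, not drawn from any regulation cited here.
HAZARDS = [
    # (description, severity 1-5, probability 1-5)
    ("Missed sepsis alert due to model under-triage", 5, 2),
    ("Degraded performance on an under-represented subgroup", 4, 3),
    ("Cybersecurity breach exposing training records", 5, 1),
    ("User interface mislabels model confidence", 2, 3),
]

def classify(severity: int, probability: int) -> str:
    """Bucket a hazard by its severity x probability score."""
    score = severity * probability
    if score >= 15:
        return "unacceptable: redesign required"
    if score >= 8:
        return "reduce as far as possible and document residual risk"
    return "acceptable with routine post-market monitoring"

for description, severity, probability in HAZARDS:
    print(f"{description}: {classify(severity, probability)}")
```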
Data Quality

Data quality is the extent to which a dataset satisfies the needs of the user and is suitable for its intended purpose [Johnson et al., 2015]. While data quality issues can impact all modeling efforts, they are particularly problematic in healthcare [Hasan and Padman, 2006], owing to the lack of standardized approaches for describing and handling such issues, the absence of a universal record storage model, the multitude of vocabularies and terminologies used, the inherent complexity of healthcare data, and the ongoing evolution of medical knowledge [Simon and Aliferis, 2024].

The ITU FG-AI4H recommends that "developers should consider whether available data are of sufficient quality to support the development of the AI system to achieve the intended purpose" [Salathé et al., 2018]. Furthermore, "developers should consider deploying rigorous pre-release evaluations for AI systems to ensure that they will not amplify any. . . biases and errors. Careful design or prompt troubleshooting can help identify data quality issues early and can prevent or mitigate possible resulting harm. Stakeholders should also consider mitigating data quality issues and the associated risks that arise in health-care data, as well as continue to work to create data ecosystems to facilitate the sharing of good-quality data sources" [Salathé et al., 2018]. Appendix A (Table 3) explains in detail how laws within the scope of this paper address the WHO's principle of data quality.

As per our analysis, we find that Australia exemplifies "data ecosystems" and "sharing of good-quality data sources" through its mention of the healthcare system and national interoperability standards [University, 2023].

Japan and Rwanda also propose similar concepts. Japan highlights the important concept of "converting data in a form suitable for AI" and the creation of "data economic zones", which will enable the use of AI for healthcare applications [Government of Japan, 2022]. Rwanda proposes an implementation plan for the availability and accessibility of quality data through indicators such as the size of open AI-ready data [Ministry of ICT and Innovation, Rwanda, 2020].

While data quality is essential for building accurate AI models, quality culture as an organization influences data management approaches [FDA, 2019]. The UK has a similar approach, as it speaks of using a data quality culture, action plans, and root cause analysis to address data quality issues at the source [UK Government, 2024]. The Framework [UK Government, 2024] also speaks of data maturity models and metadata guidance to bring data quality to life. The European Health Data Space (EHDS-TEHDAS) data quality framework recommends more granular mechanisms of data quality management [European Union, 2024]. Singapore [Health Sciences Authority (HSA), 2022], Hong Kong [HK Government, 2024], and India [Indian Council of Medical Research (ICMR), 2023] also discuss the quality of learning and training datasets for accurate validation.
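As a concrete illustration of the kind of evidence the ITU FG-AI4H recommendation points to, the sketch below runs basic completeness, plausibility, and duplication checks over a toy clinical dataset. The field names, plausible ranges, and records are hypothetical assumptions of ours, not taken from any cited guidance.

```python
# Illustrative sketch only: simple completeness, plausibility, and duplication
# checks a developer might run before training. Field names, plausible
# ranges, and records are hypothetical, not taken from any cited guidance.
from dataclasses import dataclass

@dataclass
class QualityReport:
    n_records: int
    missing_rate: dict    # per-field fraction of missing values
    out_of_range: dict    # per-field count of implausible values
    duplicates: int       # count of exact duplicate records

def assess_quality(records, plausible_ranges):
    missing = {f: sum(r.get(f) is None for r in records) / len(records)
               for f in plausible_ranges}
    out_of_range = {f: sum(r.get(f) is not None and not (lo <= r[f] <= hi)
                           for r in records)
                    for f, (lo, hi) in plausible_ranges.items()}
    seen, duplicates = set(), 0
    for r in records:
        key = tuple(sorted(r.items()))
        duplicates += key in seen
        seen.add(key)
    return QualityReport(len(records), missing, out_of_range, duplicates)

# Hypothetical records: age in years, systolic blood pressure (sbp) in mmHg.
records = [
    {"age": 54, "sbp": 128}, {"age": 61, "sbp": None},
    {"age": 61, "sbp": 300},   # implausible value -> flagged for review
    {"age": 54, "sbp": 128},   # exact duplicate record
]
print(assess_quality(records, {"age": (0, 120), "sbp": (50, 260)}))
```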
Intended Use, Analytical and Clinical Validation

The WHO points to the International Medical Device Regulators Forum (IMDRF)'s definition of clinical evaluation [WHO, 2023a], which consists of valid clinical association, analytical validation, and clinical validation, as quoted below:

• "Valid clinical association: Is there a valid clinical association between your SaMD output and your SaMD's targeted clinical condition?

• Analytical validation: Does your SaMD correctly process input data to generate accurate, reliable, and precise output data?

• Clinical validation: Does use of your SaMD's accurate, reliable, and precise output data achieve your intended purpose in your target population in the context of clinical care?"

On analysis of the jurisprudence within the scope of this paper, we found that national laws in this domain were lacking. While there were technical guidance documents specific to AI applications in healthcare published in Canada [Health Canada, 2023], Singapore [Health Sciences Authority (HSA), 2022], Hong Kong [HK Government, 2024], Saudi Arabia [Saudi Food and Drug Authority (SFDA), 2023], and India [Indian Council of Medical Research (ICMR), 2023], most of the technical documents by other agencies currently address AI as a subset of software, and specific requirements are yet to be updated. Appendix A (Table 4) explains in detail how laws within the scope of this paper address the WHO's principle of intended use and clinical validation.

The ITU FG-AI4H recommends the use of randomized clinical trials as the gold standard for evaluation of comparative clinical performance, especially for the highest-risk tools or where the highest standard of evidence is required [ITU FG-AI4H, 2022]. It also associates documentation and transparency with validation, mentioning training dataset composition and external analytical validation in an independent dataset. Currently, a number of international standards are underway, such as ISO/IEC TC215 [ISOTC, 2024], IEEE P2802 [StanDict, 2024], and IEC/TC62 PT 63450 [IEC, 2024], which regulatory guidelines can later reference.

Singapore mentions the type of clinical evidence recommended to support the clinical evaluation process for software and AI-enabled medical devices, such as acceptance limits of testing parameters [Health Sciences Authority (HSA), 2022]. Saudi Arabia also notes the absence of internationally aligned frameworks for clinical evaluation of AI/ML-enabled medical devices and has gone on to reference the IMDRF recommendations, while specifying example metrics of clinical validation in intended use environments, such as positive predictive value (PPV) and likelihood ratio negative (LR-), along with mentioning a value greater than 0.81 as admissible for clinical validation [Saudi Food and Drug Authority (SFDA), 2023].
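To illustrate the metrics named in the SFDA guidance, the snippet below computes PPV and LR- (together with the sensitivity and specificity they derive from) for a hypothetical confusion matrix. The counts are invented for illustration, and the 0.81 comparison simply mirrors the admissibility value quoted above.

```python
# Illustrative sketch only: PPV and LR- from a hypothetical confusion matrix.
# The counts are invented; the 0.81 bar mirrors the SFDA value quoted above.
def clinical_validation_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sensitivity = tp / (tp + fn)                    # true positive rate
    specificity = tn / (tn + fp)                    # true negative rate
    ppv = tp / (tp + fp)                            # positive predictive value
    lr_negative = (1 - sensitivity) / specificity   # likelihood ratio negative
    return {"sensitivity": sensitivity, "specificity": specificity,
            "PPV": ppv, "LR-": lr_negative}

# Hypothetical evaluation of a diagnostic SaMD on 1,000 patients.
metrics = clinical_validation_metrics(tp=180, fp=40, fn=20, tn=760)
for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
print("PPV clears a 0.81 bar:", metrics["PPV"] > 0.81)  # 180/220 = 0.818
```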
Another example of specific guidance in this domain is India's recommendation of the use of Standard Protocol Items: Recommendations for Interventional Trials–Artificial Intelligence (SPIRIT-AI) and Consolidated Standards of Reporting Trials–Artificial Intelligence (CONSORT-AI) as frameworks for designing and running clinical assessment trials related to interventions with AI as a component [Indian Council of Medical Research (ICMR), 2023]. Appendix A (Table 4) details how the jurisprudence in this paper relates to this principle.
Privacy and Data Protection

A number of laws have been passed in the spirit of privacy and data protection, with the EU GDPR coming into effect in 2018 [GDPR, 2018]. The GDPR's data protection by design provisions⁴ [GDPR, 2018] are being echoed by other nations as well, such as India's proposed data privacy by design policy [Government of India, 2023]. Privacy impact assessments, a popular approach for proactive privacy risk assessment and mitigation, are frequently included in privacy frameworks. Particular to health data, the European Health Data Space (EHDS) [European Union, 2024] seeks to foster ownership of healthcare data by individuals and builds further on the GDPR.

⁴ Articles 25 and 32

The WHO Global Strategy on Digital Health (2020–2025) [World Health Organization, 2021] classifies health data as sensitive personal data, or personally identifiable information, that requires a high standard of safety and security. India's ICMR guidelines [Indian Council of Medical Research (ICMR), 2023] call out anonymization of data in line with the WHO strategy. It is interesting to note, however, that anonymization of data does not guarantee privacy: one study showed how people can be re-identified from an anonymized data collection given only their zip code, gender, and birthdate [Rocher et al., 2019].
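To see why anonymization alone can fail, the short sketch below counts how many records in a toy 'anonymized' dataset are uniquely pinned down by zip code, gender, and birthdate. The records are invented, and the counting approach is a deliberately simplified stand-in for the generative-model method of [Rocher et al., 2019].

```python
# Illustrative sketch only: counting how many records in a toy "anonymized"
# dataset are unique on (zip code, gender, birthdate). Invented data; a
# deliberately simplified stand-in for the method of Rocher et al. [2019].
from collections import Counter

records = [
    ("02139", "F", "1984-03-07"), ("02139", "F", "1984-03-07"),
    ("02139", "M", "1990-11-21"), ("10027", "F", "1975-06-02"),
    ("10027", "M", "1975-06-02"), ("94305", "F", "2001-01-15"),
]

counts = Counter(records)
unique = [r for r in records if counts[r] == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identified "
      "by zip code, gender, and birthdate alone")
```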
Singapore [Health Sciences Authority (HSA), 2022] emphasizes cybersecurity requirements for connected medical devices, with a focus on design controls, test reports, and traceability. Additionally, cybersecurity and privacy go hand in hand, examples being the UK's 'Plan for Digital Regulation' [Department for Digital, Culture, Media & Sport (DCMS), 2022] and Saudi Arabia's 'Guidance on AI/ML based Medical Devices' [Saudi Food and Drug Authority (SFDA), 2023], which focus on infrastructure security. Appendix A (Table 5) details how the jurisprudence in this paper relates to this principle.

4.3 Regulations on AI in Healthcare

Our analysis of the laws and regulations within the scope of this paper shows that countries are at different stages of developing AI governance frameworks. Most nations are still in the strategy and policy stages of sector-agnostic generic AI regulation, while healthcare agencies in specific countries have been seen to provide healthcare-specific guidance on AI regulation.

Figure 4 provides a visual representation of the AI regulatory landscape in healthcare across the jurisdictions analyzed in this paper, illustrating the current state of AI policy development in each region. Figure 6 provides a detailed timeline of the laws analyzed in this paper and their relation to AI regulation in healthcare. Figure 5 provides a holistic overview of which jurisprudence in this paper is related to the WHO's principles for regulation of AI in healthcare [WHO, 2023a] (more information is presented in Appendix A: Tables 1, 2, 3, 4, and 5).

As per our analysis, most documents discussed in this paper touch upon the WHO's key principles, with internal references to international standards such as those of ISO and the OECD, and have been observed to converge on the WHO's principles.

The most active nations in AI regulation have been the US, EU, and China, as per recent regulatory reports [Fritz and Giardini, 2024]. However, our analysis shows that nations in the Middle East and Southeast Asia are also picking up legislation and policy centered around regulation of AI in healthcare. An example of this is India [Indian Council of Medical Research (ICMR), 2023], which calls out ethical principles for AI in healthcare. These principles include acceptable tests for clinical validation of AI used in healthcare and an ethics checklist that covers participant recruitment methods used in training models. Saudi Arabia has also taken a prescriptive approach by elaborating expectations and requirements for AI/ML-based device manufacturers, such as clinical evaluation, risk management, and quality management systems [Saudi Food and Drug Authority (SFDA), 2022].

African nations are picking up pace on framing policies for AI regulation, with more focus on infrastructure development and user privacy. Rwanda has entered into contracts with digital healthcare providers on AI-powered triage, symptom-checking, and cancer detection [AUDA-NEPAD, 2024].
More details on the prescriptive nature of specific guidance documents, laws, policies, and regulations are provided in the supplement (Tables 1, 2, 3, 4, and 5).

We observe that the WHO core principles [WHO, 2023a] on AI regulation in healthcare have already been elucidated in various pre-existing standards for medical devices and pharmaceuticals across nations: the principles of transparency, intended use, clinical validation, risk management, and privacy have been treated exhaustively in previously published standards such as ISO 13485:2016 and ISO 14971:2019. The ITU FG-AI4H approach demarcates AI requirements for medical devices into general, pre-market, and post-market requirements. This approach, which follows the structure of existing total product life cycle approaches for health applications, demonstrates that AI/ML-enabled products have both general requirements (like any other product) and AI-specific requirements that must be considered independently [ITU FG-AI4H, 2022].

While new AI-specific legal instruments are emerging, many countries are also incorporating AI regulation into existing documents by addressing the additional requirements necessary for AI. For example, Singapore appended an additional section (Section 9) to its guidance on software as a medical device (SaMD) [Health Sciences Authority (HSA), 2022]. This may be an effective stop-gap solution to regulate AI-enabled products in healthcare while more powerful AI laws are being developed.

The convergence of opinion by most nations on the regulation of AI used in healthcare is a positive development, given the differing opinions on generic AI regulation. For example, the EU takes a more proactive approach to AI regulation [Stahl et al., 2022], whereas countries like Japan, South Korea, and Singapore are mostly prioritizing AI capability development and research [Radu, 2021]. In contrast, China has adopted a more "top-down" national strategy [Zeng, 2022]. Italy, on the other hand, has been doubling down on privacy concerns, as evidenced through its temporary ban on ChatGPT [Bolici et al., 2024]. We believe that these divergent approaches to generic AI regulation can create a regulatory burden on companies using AI. Comprehensive, healthcare-specific AI regulations are still needed [Reddy et al., 2020]. However, the current reliance on soft-law approaches [Palaniappan et al., 2024] allows for the flexibility and adaptability necessary for healthcare regulations to align with WHO guidelines [WHO, 2023a].

While many of the existing AI governance laws are overarching and cover multiple sectors including healthcare, specific focus on regulating AI in healthcare is still a challenge [Simon and Aliferis, 2024]. This is all the more relevant with the rise of LLMs in healthcare, such as Med-PaLM, ChatDoctor, and ClinicalBERT [Yang et al., 2023], which are at the forefront of medical diagnosis, treatment, patient education, and clinical documentation.

GenAI application in healthcare is expected to grow at a CAGR of 35.14% between 2023 and 2032 [Precedence Research, 2024], and over two-thirds of US physicians view GenAI as beneficial in healthcare [Wolters Kluwer, 2024]. Regulation of GenAI used in healthcare therefore requires a precise approach [Meskó and Topol, 2023].

5 Generative AI: The New Frontier

5.1 GenAI Regulation: Why Do We Need To Regulate It Differently?

Generative AI, unlike traditional AI, uses unsupervised learning and generative models to create entirely new data that resembles training data [Hacker et al., 2023]. This makes it extremely vulnerable to hallucinations, bias, and misuse [Fui-Hoon Nah et al., 2023]. Neural network models, which are the core of GenAI, suffer from a lack of transparency and explainability, making it difficult to audit them for biases and privacy violations [Salahuddin et al., 2022]. While AI governance is picking up speed, regulations surrounding GenAI may need to be formed keeping these specific differences in mind [Hacker et al., 2023].

The rapid evolution of GenAI makes risk analysis a challenging topic when evaluating business potential, and thus makes regulation difficult [TLR Health Europe, 2023]. In current use cases, most GenAI products are trained on structured and unstructured healthcare data containing personally identifiable information [Petrenko and Boloban, 2023]. Moreover, while GenAI has the potential to reduce the clinical administrative burden on healthcare workers, inaccurate information can adversely affect patients [Harrer, 2023]. This underscores the need for clear regulation of GenAI in healthcare settings [TLR Health Europe, 2023].

The WHO's paper on regulatory considerations on artificial intelligence for health [WHO, 2023a] highlights how GenAI may already be violating the GDPR, as summarized in Figure 7.

As per the European Data Protection Board (EDPB), several Supervisory Authorities have initiated data protection investigations under Article 58(1)(a) and (b) GDPR against OpenAI (developer of the LLM ChatGPT) in the context of the ChatGPT service [European Data Protection Board (EDPB), 2024]. A special task force has been designated for investigating how ChatGPT is positioned with respect to the principles of lawfulness, data collection, fairness, transparency, data accuracy, and subject rights [European Data Protection Board (EDPB), 2024]. The Australian Alliance for Artificial Intelligence in Healthcare (AAAiH) also recommends communicating "the need for caution in the clinical use of generative AI when it is currently untested or unregulated for clinical settings, including the preparation of clinical documentation" (Recommendation 4, AAAiH, 2024).

National and global regulatory bodies are struggling to keep pace with the rapid advancements in GenAI, as the technology's trajectory remains uncertain [TLR Health Europe, 2023]. Regulatory mechanisms for GenAI have been proposed that advocate three layers of regulation: universal technology-neutral regulation, regulation of high-risk applications of GenAI rather than of pre-trained models, and regulation of information access [Hacker et al., 2023]. The challenge for regulatory authorities lies in anticipating the full scope of GenAI's evolution and developing comprehensive regulations that address its multifaceted implications [TLR Health Europe, 2023]. We have identified two nations (China and Singapore) with specific GenAI regulations [Luckett, 2023; Soon and Tan, 2023], discussed in Section 5.2 as representative examples of GenAI-specific regulation.
Figure 4: Visual representation of the current AI regulatory landscape in healthcare across 14 jurisdictions

Figure 5: Depiction of laws illustrating WHO's core principles on ethical AI use in healthcare

Figure 6: Timeline presenting the evolution of AI legislation across 14 jurisdictions, categorizing each legislation as binding or non-binding and as sector-agnostic or healthcare-specific

Figure 7: Key instances of how large language models (LLMs) violated the EU General Data Protection Regulation (GDPR)

Figure 8: Key regulations on Generative AI (GenAI) in China
5.2 Current Legislation on GenAI

Currently, we did not come across any legal jurisprudence on GenAI used specifically in healthcare. However, there has been some legal activity on GenAI as a whole. China and Singapore are prominent examples of how GenAI-specific legislation has shaped national legal landscapes.

China

The Chinese government has been enacting a number of laws to regulate GenAI. As shown in Figure 8, the common theme of these laws is their emphasis on regulating the use of data from illegal sources to train models, privacy and security, accountability for content production, tagging of GenAI-generated content, and complaint management [Wu, 2023]. A number of other standards were released early in 2024, such as the Draft standards for security specifications on generative AI pre-training and fine-tuning data processing activities (GenAI training data draft standards), the Draft standards for security specifications on GenAI data annotation (GenAI annotation draft standards), and the Basic security requirements for generative artificial intelligence service (GenAI standards) [Hurcombe and Neo, 2024]. These standards also highlight the protection of national security, intellectual property, and individual rights [Hurcombe and Neo, 2024]. It is noteworthy that the Generative AI Measures apply extraterritorially, allowing China to require non-compliant foreign generative AI service providers operating in China to take necessary measures [Yan, 2024].

Singapore

Singapore had released its Model AI Governance Framework in 2019 to lay the groundwork for responsible use of AI. With the rise of GenAI, the AI Verify Foundation and the Infocomm Media Development Authority of Singapore (IMDA) released the 'Discussion Paper on Generative AI: Implications for Trust and Governance' [IMDA, 2023b]. In response to the discussion paper, AI Verify and IMDA have jointly released the 'Model AI Governance Framework for Generative AI' [IMDA, 2024]. While this Framework focuses on the known topics of data quality, transparency, incident reporting, security, safety, and testing, it also addresses content provenance, such as digital watermarking and cryptographic provenance [IMDA, 2024]. The collaboration has also proposed an 'initial set of standardized model safety evaluations for LLMs', including domain-specific tests for medicine [IMDA, 2023a].

6 Results and Conclusion

In this paper, we analyzed 25 policy, strategy, and guidance documents, laws, and acts centered around AI in healthcare across 14 diverse legal jurisdictions and underscored a global drive towards responsible AI integration within healthcare. Most of the specific regulation on AI in healthcare that we analyzed takes the form of non-binding instruments (6 documents), which has both positive and negative consequences. Non-binding approaches offer flexibility and can be easily adapted to the evolving AI landscape. However, their voluntary nature means organizations may choose not to adopt them.

Our findings (Sections 4.2, 4.3, and Appendix A) highlight a shared commitment to aligning with the WHO's ethical AI principles, indicating a promising trajectory for the future of AI in healthcare. However, the variability in specific strategies and the pace of adoption across regions emphasizes the need for ongoing international dialogue and cooperation (Section 4.3).

Most regulations on AI broadly tackle fundamental principles common to most technologies (such as fairness, transparency, bias, and privacy). Specific healthcare-centric AI regulations have mostly been found to be proposed by healthcare regulatory bodies in government, such as the FDA (US), Health Canada (Canada), MHRA (UK), ICMR (India), and others (Section 4.3). As explored in Section 4.2 and Appendix A, we conclude that existing legislation converges with WHO principles [WHO, 2023a].
We have identified how emerging countries are also building requirements as per WHO principles [WHO, 2023a] (Section 4.2). This approach has met our objective of focusing on global regulation (Section 3) and provides insights beyond the existing literature (Section 2).

We have also analyzed regulations along four comparative parameters: sector-agnostic generic AI regulations, healthcare-specific AI regulations, non-binding instruments, and binding legal instruments. We have examined the timeline of evolution of the jurisprudence under the scope of this paper for 14 nations (Figure 6).

To take a step further, we have discussed two examples of countries framing policies around GenAI, as GenAI promises to transform healthcare (Section 5).
7 Future Directions

We believe that regulations on AI in healthcare can develop along a three-pronged approach:

Collaboration by stakeholders

We believe that regulatory bodies can refer to deliverables from focus groups such as the International Telecommunication Union Focus Group on Artificial Intelligence for Health (FG-AI4H). This particular group has published considerations for manufacturers and regulators on conducting comprehensive requirements analysis and streamlining conformity assessment procedures for continual product improvement in an iterative and adaptive manner [ITU FG-AI4H, 2022]. Such technical guidance can ensure that specific considerations of AI in healthcare are addressed in regulatory discussions. A number of international standards are under development at the time of writing of this paper, such as ISO/TC 215 (Health informatics) [ISOTC, 2024], ISO/IEC AWI TR 18988 (Artificial intelligence — Application of AI technologies in health informatics) [International Organization for Standardization (ISO), 2024b], and ISO/CD TS 24971-2 (Medical devices — Guidance on the application of ISO 14971 — Part 2: Machine learning in artificial intelligence) [International Organization for Standardization (ISO), 2024a], to name a few, which can be referred to by regulators and the healthcare industry.

The evolution of AI regulation amid fast-paced changes in technology [Digital Regulation Platform, 2024] can take inspiration from the nature of regulations on drones, which evolved from an unregulated technology to a highly regulated one in a short timeframe [Fenwick et al., 2017]. We can hope that regulators of AI will adapt to the fast-paced nature of AI and develop sector-specific regulations in a short timeframe as well.

By expanding global regulatory alliances and harmonizing requirements, individual nation states can avoid regulatory blind spots, be more prescriptive about expectations, and increase the speed of well-regulated, safe, and ethical innovation. Including manufacturers in the process of harmonization has also been called out by the WHO as a way to include all stakeholders [WHO et al., 2024]. This approach will likely reduce the regulatory burden on manufacturers, healthcare systems, and patients by decreasing avoidable variation in regulatory requirements, thereby maximizing their potential benefits for global health while mitigating potential risks.

A possibility of harmonization

The call for global harmonization of regulations – be it in pharmaceuticals, biologics, or medical devices – has been steadily increasing within the industry over the years [Lindström-Gommers and Mullin, 2019]. As highlighted in this paper, the regulation of artificial intelligence (AI) in healthcare remains in its early stages (Section 4.3), presenting a unique opportunity for harmonization. The foundational principles outlined by the WHO [WHO, 2023a] offer a promising framework for alignment. One example is the EU's new AI Office and the UK's AI Safety Institute, which could potentially interface to drive a greater degree of global harmonization of AI regulation. Another example is a first-of-its-kind international treaty adopted by the Council of Europe (CoE): the Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (Convention), with 46 member states and with countries from all over the world being eligible to join it [Leslie et al., 2021]. Harmonization of regulations on AI in healthcare is yet to be seen, especially with the rise of regulatory sandboxes and existing differences in healthcare systems around the world [Leckenby et al., 2021; Cancarevic et al., 2021]. Moreover, given the diverse approaches of individual countries, ranging from pro-innovation to pro-risk, achieving true harmonization may prove challenging [Thierer, 2023].

Experts are already concerned about the divergence between sector-agnostic AI regulation and healthcare-specific regulations, an example being the EU AI Act and the EU MDR being described as an "arranged marriage" and "conjoined twins" [Regulatory Affairs Professionals Society (RAPS), 2024]. Even within the AI space, a variety of definitions may lead to greater confusion down the line. Since the regulation of AI is relatively new in the healthcare space, there is still time to harmonize definitions, terminologies, and legislation related to AI in healthcare. The next step in AI regulation would be to issue healthcare-specific regulations and guidance that resolve any inconsistencies between new and existing frameworks.

Risk as the new focus

As witnessed in the EU AI Act and the WHO guidance on ethical use of LLMs in healthcare, a risk-based approach with a focus on accountability at different stages of the value chain of development, deployment, and provision of AI systems is warranted [WHO et al., 2024]. The FDA's recent inclusion of ISO 14971:2019 as part of its updated Quality Management System Regulation (QMSR) also echoes similar intentions in incorporating risk into systems [Kolton, 2024]. In our analysis, we note that multiple countries have mentioned risk management and planning as key expectations (Appendix A). It is of interest to see how AI validation tools help in converting principles such as risk management into practice, with more than 230 tools for trustworthy AI spanning the US and UK [Gunashekar et al., 2024]. Existing tools specific to healthcare include Aival (for clinical users), Python NLP (for biomedical literature), Google What-If (for analyzing model prediction changes with changes in the dataset), and Optical Flow (for medical imaging), to name a few [Gunashekar et al., 2024]. The OECD website can be a great starting point for healthcare regulators to establish acceptable evidence parameters and for industry members to validate AI systems [OECD, 2024]. The validity of these tools is yet to be seen, with studies revealing that the effectiveness of emerging AI auditing tools may be questionable [Graham et al., 2020].
Ethical Statement

There are no ethical issues.

Acknowledgments

We would like to extend our appreciation to the AI-Global Health Initiative (AI-GHI) for providing the platform to connect with diverse stakeholders. Recognized as an FDA Collaborative Community in 2019 and 2021–2024 (active), the AI-GHI leverages the diverse backgrounds of its stakeholders from the medical device, pharmaceutical, biological, hospital, and research sectors to identify the current and future needs of healthcare and to provide guidance on how to navigate barriers and risks around the implementation of AI/ML in all of healthcare. Special thanks to Lacey Harbour for her invaluable help with comments on this paper.

AC extends heartfelt gratitude to Susobhan Ghosh for his invaluable assistance in formatting this paper in LaTeX and his meticulous review of the formatting.

References
[Abdullahi Tsanni, 2024] Abdullahi Tsanni (2024). [Dwivedi et al., 2021] Dwivedi, Y. K., Hughes, L., Ismag-
Africa’s push to regulate ai starts now. Retrieved from ilova, E., Aarts, G., Coombs, C., Crick, T., Duan,
https://www.technologyreview.com/2024/03/15/1089844/ Y., Dwivedi, R., Edwards, J., Eirug, A., Galanos, V.,
africa-ai-artificial-intelligence-regulation-au-policy/. Ilavarasan, P. V., Janssen, M., Jones, P., Kar, A. K., Kiz-
gin, H., Kronemann, B., Lal, B., Lucini, B., and Medaglia,
[AUDA-NEPAD, 2024] AUDA-NEPAD (2024). Regula- R. (2021). Artificial intelligence (ai): Multidisciplinary
tion and Responsible Adoption of AI in Africa Towards perspectives on emerging challenges, opportunities, and
Achievement of AU Agenda 2063. agenda for research, practice and policy. International
[Bajwa et al., 2021] Bajwa, J., Munir, U., Nori, A., and Journal of Information Management, 57:101994.
Williams, B. (2021). Artificial intelligence in healthcare: [Dı́az-Rodrı́guez et al., 2023] Dı́az-Rodrı́guez, N., Ser, J. D.,
transforming the practice of medicine. Future Healthcare Coeckelbergh, M., López, M., Herrera-Viedma, E., and
Journal, 8(2):e188–e194. Herrera, F. (2023). Connecting the dots in trustworthy
[Barredo Arrieta et al., 2020] Barredo Arrieta, A., Dı́az- artificial intelligence: From ai principles, ethics, and key
Rodrı́guez, N., Del Ser, J., Bennetot, A., Tabik, S., Bar- requirements to responsible ai systems and regulation. In-
bado, A., Garcı́a, S., Gil-López, S., Molina, D., Ben- formation Fusion, 99:101896–101896.
jamins, R., Chatila, R., and Herrera, F. (2020). Explainable [European Data Protection Board (EDPB), 2024] European
artificial intelligence (xai): Concepts, taxonomies, oppor- Data Protection Board (EDPB) (2024). Report of the
tunities and challenges toward responsible ai. Information work undertaken by the chatgpt taskforce. Retrieved
Fusion, 58:82–115. from https://www.edpb.europa.eu/system/files/2024-05/
[Bolici et al., 2024] Bolici, F., Varone, A., and Diana, G. edpb 20240523 report chatgpt taskforce en.pdf.
(2024). Unpopular policies, ineffective bans: Lessons [European Union, 2024] European Union (2024). European
learned from chatgpt prohibition in italy. In ECIS 2024 health data space.
Proceedings, volume 11. [FDA, 2019] FDA (2019). Proposed regulatory framework
[Cancarevic et al., 2021] Cancarevic, I., Plichtová, L., and for modifications to artificial intelligence/machine learn-
Malik, B. H. (2021). Healthcare systems around the world. ing (ai/ml)-based software as a medical device (samd) -
discussion paper and request for feedback. Last Accessed: [Goldberg et al., 2024] Goldberg, C. B., Adams, L., Blu-
22 May, 2024. menthal, D., Brennan, P. F., Brown, N., Butte, A. J.,
[FDA, 2024] FDA (2024). Artificial intelligence and ma- Cheatham, M., deBronkart, D., Dixon, J., Drazen, J.,
chine learning (ai/ml)-enabled medical devices. Last Ac- Evans, B. J., Hoffman, S. M., Holmes, C., Lee, P., Man-
cessed: 22 May, 2024. rai, A. K., Omenn, G. S., Perlin, J. B., Ramoni, R., Sapiro,
G., and Sarkar, R. (2024). To do no harm — and the most
[Fenwick et al., 2017] Fenwick, M. D., Kaal, W. A., and Ver- good — with ai in health care. NEJM AI, 1(3).
meulen, E. P. M. (2017). Regulation tomorrow: What hap-
[Government of India, 2023] Government of India (2023).
pens when technology is faster than the law? American
University Business Law Review, 6(3). Available at: http: Personal data protection act.
//digitalcommons.wcl.american.edu/aublr/vol6/iss3/1. [Government of Japan, 2022] Government of Japan (2022).
[Florian Königstorfer and Thalmann, S., 2022] Florian Ai strategy 2022. Outline of Japan’s AI policies and ini-
Königstorfer and Thalmann, S. (2022). Ai documenta- tiatives.
tion: A path to accountability. Journal of Responsible [Graham et al., 2020] Graham, L., Gilbert, A., Simons, J.,
Technology, 11:100043. Thomas, A., and Mountfield, H. (2020). Artificial intel-
[Food and (SFDA), 2023] Food, S. and (SFDA), D. A. ligence in hiring: Assessing impacts on equality. Institute
(2023). Guidance on ai/ml based medical devices. Quality for the Future of Work.
management systems and documentation for AI/ML based [Gunashekar et al., 2024] Gunashekar, S., van Soest, H., Qu,
medical devices. M., Politi, C., Aquilino, M. C., and Smith, G. (2024). Ex-
[for Economic Co-operation and (OECD), 2023] for Eco- amining the landscape of tools for trustworthy ai in the
nomic Co-operation, O. and (OECD), D. (2023). OECD uk and the us: Current trends, future possibilities, and po-
Artificial Intelligence Review of Egypt. An in-depth review tential avenues for collaboration. Retrieved from https:
of Egypt’s AI policies and initiatives by the OECD. //www.rand.org/pubs/research reports/RRA3194-1.html.
[for Standardization (ISO), 2024a] for Standardiza- [Hacker et al., 2023] Hacker, P., Engel, A., and Mauer, M.
tion (ISO), O. (2024a). Iso/cd ts 24971-2. (2023). Regulating chatgpt and other large generative ai
models. In Proceedings of the 2023 ACM Conference on
[for Standardization (ISO), 2024b] for Standardiza- Fairness, Accountability, and Transparency (FAccT ’23),
tion (ISO), O. (2024b). Iso/iec awi tr 18988. pages 1112–1123. Association for Computing Machinery.
[Fraser et al., 2023] Fraser, A. G., Biasin, E., Bijnens, B., [Harrer, 2023] Harrer, S. (2023). Attention is not all you
Bruining, N., Caiani, E. G., Cobbaert, K., Davies, R. H., need: the complicated case of ethically using large lan-
Gilbert, S. H., Hovestadt, L., Kamenjasevic, E., Kwade, guage models in healthcare and medicine. EBioMedicine,
Z., McGauran, G., O’Connor, G., Vasey, B., and Rade- 90:104512–104512.
makers, F. E. (2023). Artificial intelligence in medical de-
vice software and high-risk medical devices – a review of [Hasan and Padman, 2006] Hasan, S. and Padman, R.
definitions, expert recommendations and regulatory initia- (2006). Analyzing the effect of data quality on the ac-
tives. 20(6), pages 467–491. curacy of clinical decision support systems: a computer
simulation approach. In AMIA ... Annual Symposium pro-
[Fritz, J. and Giardini, T., 2024] Fritz, J. and Giardini, T. ceedings. AMIA Symposium, pages 324–328.
(2024). Emerging contours of ai governance and the three
layers of regulatory heterogeneity. Digital Policy Alert [Health Canada, 2023] Health Canada (2023). Draft
Working Paper 24-001. guidance: Pre-market guidance for machine
learning-enabled medical devices. Retrieved
[Fui-Hoon Nah et al., 2023] Fui-Hoon Nah, F., Zheng, R., from https://www.canada.ca/en/health-canada/
Cai, J., Siau, K., and Chen, L. (2023). Generative ai and services/drugs-health-products/medical-devices/
chatgpt: Applications, challenges, and ai-human collabo- application-information/guidance-documents/
ration. Journal of Information Technology Case and Ap- pre-market-guidance-machine-learning-enabled-medical-devices.
plication Research, 25(3):277–304. html.
[GDPR, 2018] GDPR (2018). General data protection regu- [HK Government, 2024] HK Government (2024). Tr-
lation (gdpr) – final text neatly arranged. 008:2024(e) medical device administrative control sys-
[Ghosh et al., 2024] Ghosh, S., Guo, Y., Hung, P.-Y., Cough- tem (mdacs) artificial intelligence medical devices (ai-
lin, L., Bonar, E., Nahum-Shani, I., Walton, M., and Mur- md) technical reference. Technical report, Department of
phy, S. (2024). rebandit: Random effects based online rl Health, The Government of the Hong Kong Special Ad-
algorithm for reducing cannabis use. arXiv e-prints, pages ministrative Region.
arXiv–2402. [(HSA), 2022] (HSA), H. S. A. (2022). Regulatory guide-
[Giovanola and Tiribelli, 2022] Giovanola, B. and Tiribelli, lines for software medical devices - a life cycle approach.
S. (2022). Beyond bias and discrimination: redefining Technical report, Health Sciences Authority (HSA). Guid-
the ai ethics principle of fairness in healthcare machine- ance document for the regulatory requirements of software
learning algorithms. AI & Society, 38(2):549–563. medical devices in Singapore.
[Hurcombe and Neo, 2024] Hurcombe, L. and Neo, versus practice. In Proceedings of the AAAI/ACM Confer-
H. Y. (2024). China ai trailblazers in genai ence on AI, Ethics, and Society (AIES ’20), pages 72–78.
standards in asia. DLA Piper. Retrieved from Association for Computing Machinery.
https://www.dlapiper.com/en/insights/publications/ [Leckenby et al., 2021] Leckenby, E., Dawoud, D., Bouvy,
2024/04/china-ai-trailblazers-in-genai-standards-in-asia. J., and Jónsson, P. (2021). The sandbox approach and its
[IEC, 2024] IEC (2024). Iec pt 63450 dashboard. potential for use in health technology assessment: A litera-
[IMDA, 2023a] IMDA (2023a). Cataloguing llm evalua- ture review. Applied Health Economics and Health Policy,
tions: Draft for discussion. Last Accessed: 22 May, 2024. 19(6):857–869.
[IMDA, 2023b] IMDA (2023b). Generative ai: Implications [Lehmann, 2021] Lehmann, L. S. (2021). Ethical challenges
for trust and governance. Last Accessed: 22 May, 2024. of integrating ai into healthcare. In Springer EBooks,
pages 1–6.
[IMDA, 2024] IMDA (2024). Proposed model ai governance
framework for generative ai: Fostering a trusted ecosys- [Leslie et al., 2021] Leslie, D., Burr, C., Aitken, M., Cowls,
tem. Last Accessed: 22 May, 2024. J., Katell, M., and Briggs, M. (2021). Artificial intelli-
gence, human rights, democracy, and the rule of law: a
[Indian Council of Medical Research (ICMR), 2023] Indian primer. arXiv preprint arXiv:2104.04147.
Council of Medical Research (ICMR) (2023). Ethical
guidelines for application of artificial intelligence in [Lindström-Gommers and Mullin, 2019] Lindström-
biomedical research and healthcare. Prepared by DHR- Gommers, L. and Mullin, T. (2019). International
ICMR Artificial Intelligence Cell. Retrieved from https: conference on harmonization: Recent reforms as a driver
//main.icmr.nic.in/sites/default/files/upload documents/ of global regulatory harmonization and innovation in med-
Ethical Guidelines AI Healthcare 2023.pdf. ical products. Clinical Pharmacology & Therapeutics,
105(4):926–931.
[ISOTC, 2024] ISOTC (2024). Iso/tc 215.
[Luckett, 2023] Luckett, J. (2023). Regulating generative
[ITU FG-AI4H, 2022] ITU FG-AI4H (2022). Good ai: A pathway to ethical and responsible implementation.
practices for health applications of machine learn- Journal of Computing Sciences in Colleges, 39(3):47–65.
ing: Considerations for manufacturers and regulators.
Retrieved from https://www.itu.int/dms pub/itu-t/opb/fg/ [Magrabi et al., 2019] Magrabi, F., Ammenwerth, E., Mc-
T-FG-AI4H-2022-2-PDF-E.pdf. Nair, J. B., De Keizer, N. F., Hyppönen, H., Nykänen,
P., Rigby, M., Scott, P. J., Vehko, T., Wong, Z. S.-Y.,
[Jiang et al., 2017] Jiang, F., Jiang, Y., Zhi, H., Dong, Y., Li, and Georgiou, A. (2019). Artificial intelligence in clin-
H., Ma, S., Wang, Y., Dong, Q., Shen, H., and Wang, Y. ical decision support: Challenges for evaluating ai and
(2017). Artificial intelligence in healthcare: past, present practical implications. Yearbook of Medical Informatics,
and future. Stroke and Vascular Neurology, 2(4):230–243. 28(01):128–134.
[Jindal et al., 2024] Jindal, J. A., Lungren, M. P., and Shah, [Meskó and Topol, 2023] Meskó, B. and Topol, E. J. (2023).
N. H. (2024). Ensuring useful adoption of generative arti- The imperative for regulatory oversight of large language
ficial intelligence in healthcare. Journal of the American models (or generative ai) in healthcare. npj Digital
Medical Informatics Association, 31(6):1441–1444. Medicine, 6:120.
[Johnson et al., 2015] Johnson, S. G., Byrne, M. D., Christie, [METI Japan, 2024] METI Japan (2024). (Draft)
B., Delaney, C. W., LaFlamme, A., Park, J. I., Pruinelli, AI guidelines for business. Retrieved from
L., Sherman, S. G., Speedie, S., and Westra, B. L. (2015). https://www.meti.go.jp/shingikai/mono info service/
Modeling flowsheet data for clinical research. In AMIA Jt ai shakai jisso/pdf/20240119 4.pdf.
Summits Transl Sci Proc, pages 77–81.
[Ministry of ICT and Innovation, Rwanda, 2020] Ministry
[Kanbach et al., 2023] Kanbach, D. K., Heiduk, L., Blueher, of ICT and Innovation, Rwanda (2020). National ai
G., Schreiter, M., and Lahmann, A. (2023). The genai is policy. Policy document outlining Rwanda’s national AI
out of the bottle: generative artificial intelligence from a strategy.
business model innovation perspective. Review of Man-
agerial Science. [MMR, 2024] MMR (2024). Artificial intelli-
gence in healthcare market size, growth, op-
[Karimian et al., 2022] Karimian, G., Petelos, E., and Silvia portunities & trends: Global industry analysis
(2022). The ethical issues of the application of artificial and forecast (2024-2030). Retrieved from https:
intelligence in healthcare: A systematic scoping review. //www.maximizemarketresearch.com/market-report/
AI and Ethics, 2(4):539–551. global-artificial-intelligence-ai-healthcare-market/
[Kolton, 2024] Kolton, E. A. (2024). Fda final rule harmo- 21261/.
nizes medical device quality system regulation with inter- [Murphy et al., 2021] Murphy, K., Di Ruggiero, E., Upshur,
national standard. Mondaq Business Briefing, pages NA– R., Willison, D. J., Malhotra, N., Cai, J. C., Lui, V., and
NA. Gibson, J. (2021). Artificial intelligence for good health:
[Krafft et al., 2020] Krafft, P. M., Young, M., Katell, M., A scoping review of the ethics literature. BMC Medical
Huang, K., and Bugingo, G. (2020). Defining ai in policy Ethics, 22(1).
[National Institute of Standards and Technology (NIST), 2023] National Institute of Standards and Technology (NIST) (2023). Artificial intelligence risk management framework (AI RMF 1.0). Retrieved from https://doi.org/10.6028/nist.ai.100-1.
[Nestor Maslej et al., 2023] Nestor Maslej et al. (2023). The AI Index 2023 annual report. Retrieved from https://hai.stanford.edu/sites/default/files/2023-04/HAI_AI-Index-Report_2023.pdf.
[OECD, 2019] OECD (2019). AI policy observatory portal. Retrieved from https://oecd.ai/en/ai-principles.
[OECD, 2024] OECD (2024). Catalogue of tools & metrics for trustworthy AI. Retrieved from https://oecd.ai/en/catalogue/overview.
[Indian Council of Medical Research (ICMR), 2023] Indian Council of Medical Research (ICMR) (2023). Ethical guidelines for application of AI in biomedical research and healthcare. Guidelines for the ethical application of AI in biomedical research and healthcare in India.
[Palaniappan et al., 2024] Palaniappan, K., Lin, E. Y. T., and Vogel, S. (2024). Global regulatory frameworks for the use of artificial intelligence (AI) in the healthcare services sector. In Healthcare, volume 12, page 562. MDPI.
[Peter Stone et al., 2016] Peter Stone et al. (2016). Artificial intelligence and life in 2030. One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel. Retrieved from http://ai100.stanford.edu/2016-report.
[Petrenko and Boloban, 2023] Petrenko, A. and Boloban, O. (2023). Generalized information with examples on the possibility of using a service-oriented approach and artificial intelligence technologies in the industry of e-health. Technology Audit and Production Reserves, 4(2(72)):10–17.
[Precedence Research, 2024] Precedence Research (2024). Generative AI in healthcare market size, growth report 2032. Last accessed: 22 May 2024.
[Radu, 2021] Radu, R. (2021). Steering the governance of artificial intelligence: National strategies in perspective. Policy and Society, 40(2):178–193.
[Rahman et al., 2024] Rahman, M. A., Victoros, E., Ernest, J., Davis, R., Shanjana, Y., and Islam, M. R. (2024). Impact of artificial intelligence (AI) technology in healthcare sector: A critical evaluation of both sides of the coin. Clinical Pathology, 17:2632010X241226887.
[Reddy, 2023] Reddy, S. (2023). Navigating the AI revolution: The case for precise regulation in health care. Journal of Medical Internet Research, 25:e49989.
[Reddy, 2024] Reddy, S. (2024). Generative AI in healthcare: An implementation science informed translational path on application, integration and governance. Implementation Science, 19(1):27.
[Reddy et al., 2020] Reddy, S., Allan, S., Coghlan, S., and Cooper, P. (2020). A governance model for the application of AI in health care. Journal of the American Medical Informatics Association, 27(3):491–497.
[Regulatory Affairs Professionals Society (RAPS), 2024] Regulatory Affairs Professionals Society (RAPS) (2024). Euro convergence: Experts concerned about incompatibilities between AI Act and MDR. Retrieved from https://www.raps.org/news-and-articles/news-articles/2024/5/euro-convergence-experts-concerned-about-incompati.
[Rocher et al., 2019] Rocher, L., Hendrickx, J. M., and de Montjoye, Y.-A. (2019). Estimating the success of re-identifications in incomplete datasets using generative models. Nature Communications, 10(1).
[Romagnoli et al., 2024] Romagnoli, A., Ferrara, F., Langella, R., et al. (2024). Healthcare systems and artificial intelligence: Focus on challenges and the international regulatory framework. Pharmaceutical Research, 41(3):721–730.
[Salahuddin et al., 2022] Salahuddin, Z., Woodruff, H. C., Chatterjee, A., and Lambin, P. (2022). Transparency of deep neural networks for medical image analysis: A review of interpretability methods. Computers in Biology and Medicine, 140:105111.
[Salathé et al., 2018] Salathé, M., Wiegand, T., and Wenzel, M. (2018). Focus group on artificial intelligence for health. arXiv preprint arXiv:1809.04797.
[Saudi Food and Drug Authority (SFDA), 2022] Saudi Food and Drug Authority (SFDA) (2022). Guidance on artificial intelligence (AI) and machine learning (ML) technologies based medical devices. Retrieved from https://www.sfda.gov.sa/sites/default/files/2023-01/MDS-G010ML.pdf.
[Schiff et al., 2020] Schiff, D., Biddle, J., Borenstein, J., and Laas, K. (2020). What's next for AI ethics, policy, and governance? A global overview. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society.
[Senate of Brazil, 2023] Senate of Brazil (2023). Bill No. 2338/2023. In progress.
[Simon and Aliferis, 2024] Simon, G. J. and Aliferis, C. (2024). Artificial Intelligence and Machine Learning in Health Care and Medical Sciences. Springer International Publishing.
[Smith, 2021] Smith, H. (2021). Clinical AI: Opacity, accountability, responsibility and liability. AI & Society, 36(4):535–545.
[Soon and Tan, 2023] Soon, C. and Tan, B. (2023). Regulating artificial intelligence: Maximising benefits and minimising harms.
[Stahl et al., 2022] Stahl, B. C., Rodrigues, R., Santiago, N., and Macnish, K. (2022). A European agency for artificial intelligence: Protecting fundamental rights and ethical values. Computer Law & Security Review, 45:105661.
[StanDict, 2024] StanDict (2024). IEEE P2802 - Standard for the performance and safety evaluation of artificial intelligence based medical device: Terminology.
[Thierer, 2023] Thierer, A. D. (2023). Flexible, pro-innovation governance strategies for artificial intelligence. R Street Policy Study, 283. Retrieved from https://ssrn.com/abstract=4423897.
[TLR Health Europe, 2023] TLR Health Europe (2023). Embracing generative AI in health care. The Lancet Regional Health-Europe, 30:100677.
[UK Government, 2021] UK Government (2021). National AI strategy. UK Government's strategy on artificial intelligence.
[UK Government, 2024] UK Government (2024). The government data quality framework.
[University, 2023] University, M. (2023). Artificial intelligence and the future of work: National agenda and roadmap. National agenda and roadmap for AI and the future of work.
[Vayena et al., 2018] Vayena, E., Blasimme, A., and Cohen, I. G. (2018). Machine learning in medicine: Addressing ethical challenges. PLOS Medicine, 15(11):e1002689.
[Viswa et al., 2024] Viswa, C. A., Bleys, J., Leydon, E., Shah, B., and Zurkiya, D. (2024). Generative AI in the pharmaceutical industry: Moving from hype to reality.
[Walter, 2024] Walter, Y. (2024). Managing the race to the moon: Global policy and governance in artificial intelligence regulation: A contemporary overview and an analysis of socioeconomic consequences. Discover Artificial Intelligence, 4(14).
[Wang and Preininger, 2019] Wang, F. and Preininger, A. (2019). AI in health: State of the art, challenges, and future directions. Yearbook of Medical Informatics, 28(01):16–26.
[WHO, 2023a] WHO (2023a). Regulatory considerations on artificial intelligence for health. Geneva: World Health Organization. Licence: CC BY-NC-SA 3.0 IGO.
[WHO, 2023b] WHO (2023b). WHO outlines considerations for regulation of artificial intelligence for health.
[WHO et al., 2024] WHO et al. (2024). Ethics and governance of artificial intelligence for health: Guidance on large multi-modal models. WHO News.
[Wolters Kluwer, 2024] Wolters Kluwer (2024). Wolters Kluwer survey: Over two-thirds of U.S. physicians have changed their mind, now viewing GenAI as beneficial in healthcare. Retrieved from https://www.wolterskluwer.com/en/news/gen-ai-clincian-survey-press-release#downloads.
[World Health Organization, 2021] World Health Organization (2021). WHO Global Strategy on Digital Health (2020–2025).
[Wu, 2023] Wu, Y. (2023). China's Interim Measures to Regulate Generative AI Services: Key points.
[Yan, 2024] Yan, W. (2024). Do not go gentle into that good night: The European Union's and China's different approaches to the extraterritorial application of artificial intelligence laws and regulations. Computer Law & Security Review, 53:105965.
[Yang et al., 2023] Yang, R., Tan, T. F., Lu, W., Thirunavukarasu, A. J., Ting, D. S. W., and Liu, N. (2023). Large language models in health care: Development, applications, and challenges. Health Care Science, 2:255–263.
[Zeng, 2022] Zeng, J. (2022). China's AI approach: A top-down nationally concerted strategy? In Artificial Intelligence with Chinese Characteristics. Palgrave Macmillan, Singapore.
A Supplemental Tables
5 The TEHDAS1 project (ended in July 2023) developed joint European principles for the secondary use of health data. The work involved 25 countries. The TEHDAS2 joint action started in May 2024 and will build on the work of TEHDAS1.
Health Canada: Premarket guidance for ML-enabled MD (Canada)
Status: Draft
"From our perspective, MLMD lifecycle includes... design, testing and evaluation, clinical validation, post-market performance monitoring." "The intended use or medical purpose should be made clear in the application... including device function information."
EU AI Act (EU)
Status: In March 2024, the European Parliament voted 523–46 to formally adopt the agreed text of the AI Act. Expected to be officially published in May/June 2024.
Article 17 (Quality management system): "examination, test and validation procedures to be carried out before, during and after the development of the high-risk AI system". Annex IV (Technical documentation referred to in Article 11(1)): "the validation and testing procedures used". Article 3(53): "'real-world testing plan' means a document that describes the objectives, methodology, geographical, population and temporal scope, monitoring, organisation and conduct of testing in real-world conditions".
Regulatory Guidelines for Software Medical Devices – A Life Cycle Approach (Singapore)
Status: Published
Section 3.5 (Clinical evaluation): "The clinical evaluation process establishes that there is a valid clinical association between the software output and the specified clinical condition according to the product owner's intended use." "Test protocol and report for verification and validation of the AI-MD, including the acceptance."
Medical Device Administrative Control System (MDACS) AI Medical Device TR-008 (Hong Kong)
Status: Published
Section 4.1 (Performance and Clinical Validation): "Validation and verification test report(s) shall be provided to substantiate such performance claim (e.g. diagnostic sensitivity, diagnostic specificity, accuracy)."
Guidance on AI/ML based Medical Devices (Saudi Arabia)
Status: Published
Clinical evaluation: "A manufacturer of AI/ML-based medical devices is expected to provide clinical evidence of the device's safety, effectiveness and performance before it can be placed on the market." Analytical validation: "Analytical validation should be done using large independent reference dataset reflecting the intended purpose and the diversity of the intended population and setting." Intended use: "If the Artificial Intelligence (AI) and Machine Learning (ML) devices are intended by the Product developer to be used for investigation, detection, diagnosis, monitoring, treatment, or management of any medical condition, disease, anatomy or physiological process, it will be classified as a medical device subject to SFDA's regulatory controls."
Ethical Guidelines for application of AI in biomedical research and healthcare (India)
Status: Published
Section 1.6 (Optimization of data quality): "These inherent problems related to data can be minimized by rigorous clinical validation before any AI-based technology is used in healthcare." Section 1.10: "AI technology in healthcare must undergo rigorous clinical and field validation before application on patients/participants." Section 2 of the document, "Guiding Principles for stakeholders involved in development, validation and deployment", describes in detail how AI-based solutions for healthcare must be validated. Section 2.2 describes guiding principles for analytical and clinical validation.