Governance of AI Technologies in Financial Services
Report of the Roundtable held during 24 Fintech on 3rd September, 2024
Roundtable Discussion on the
“Governance of AI Technologies in
Financial Services” at 24 Fintech
Contributors
CO-CHAIRS
SPEAKERS
Contents

Preface
Contributors
Executive Summary
PART 1 – Are we getting close to future-proof AI regulations?
PART 2 – AI Governance Across Regions: Key Objectives and Approaches
PART 3 – Regulatory Objectives and Emerging Challenges
PART 4 – Balancing Ethical Concerns with Innovation in AI
PART 5 – Future of AI Governance – Preparing for Advanced AI
Conclusion
Appendix – List of Researchers
Executive Summary
A decade ago, the financial services sector was only beginning to explore the potential of AI,
with expectations of gradual, incremental integration. Today, however, AI has rapidly evolved
into a transformative force, with its impact accelerating faster than anticipated, especially
within financial services. The “Governance of AI Technologies in Financial Services” report
offers vital insights into AI’s evolving role, addressing regulatory challenges, ethical dilemmas,
and the pressing need for collaborative governance frameworks.
The report summarises key discussions among industry leaders, regulators, and AI experts,
highlighting the opportunities and complexities AI brings to financial services. It emphasises
the urgent need for governance models that can balance innovation with ethical responsibility.
While some experts are optimistic about the progress of AI governance, many argue that
existing frameworks fall short of managing AI’s rapid evolution. This divergence reflects
the challenge of regulating AI systems capable of learning, adapting, and autonomously
influencing critical financial decisions.
The report highlights the significant engagement from all key stakeholders – regulators, AI
developers, and financial institutions – around AI regulation. With AI systems becoming more
integrated into financial processes, the need for robust and flexible regulations has attracted
wide attention, making it a top priority across the industry.
While progress has been made in developing AI regulations, many frameworks remain reactive.
The report emphasises the need for adaptive governance models that can evolve alongside AI
technologies to ensure they remain relevant in the face of future advancements.
The report highlights significant differences in how regions such as the EU, UK, Saudi Arabia,
and Singapore approach AI regulation. These variations reflect different priorities, with some
regions focusing on innovation and flexibility, while others emphasise consumer protection and
stricter ethical standards.
Ethical AI Is Paramount
One of the most pressing concerns is ensuring that AI systems are transparent, accountable,
and free from bias. The roundtable stressed the importance of embedding ethical
considerations into AI frameworks, particularly in high-risk areas like credit scoring and lending.
AI-specific regulatory sandboxes have emerged as a critical tool for fostering innovation while
ensuring regulatory oversight. These controlled environments allow financial institutions and
regulators to collaborate on testing AI technologies, helping shape effective governance.
A critical issue raised is the need for upskilling both industry professionals and regulators. As AI
systems become more complex, financial institutions and regulatory bodies must ensure they
have the expertise to manage and govern AI-driven solutions effectively.
Collaboration Is Key
The report concludes that the governance of AI in financial services requires a multi-stakeholder
approach. Effective AI governance must be flexible, inclusive, and future-ready, ensuring
that the industry can harness AI’s potential while safeguarding against its risks. This report
offers a foundation for ongoing dialogue and future research, urging leaders to prioritise both
innovation and ethical governance as AI continues to reshape the financial landscape.
PART 1
Are we getting close to
future-proof AI regulations?
The roundtable discussion on AI governance began with a critical question: Are we getting
close to future-proof AI regulations? This sparked a lively and urgent debate, as experts
tackled the challenge of managing a technology that evolves faster than most regulatory
frameworks can adapt. Some expressed optimism, highlighting recent strides toward adaptable,
forward-looking policies. However, others warned that many current regulations remain
reactive, struggling to keep up with AI’s rapid advancements.
Roundtable poll – Yes: 40%, No: 60%
Some speakers were optimistic, highlighting that although AI regulation is in its early stages,
significant progress is being achieved through international initiatives. These frameworks are
gradually aligning with local contexts, laying the groundwork for comprehensive, future-proof
AI regulations. However, they noted this progress is still in its “baby step” phase, requiring
ongoing adaptation to keep up with technological advancements.
Others expressed scepticism, particularly in regard to the European Union’s regulatory efforts.
While the EU AI Act is a positive step, speakers pointed out its lack of specificity in addressing
the nuances of AI applications. Some felt that current frameworks are too rigid and lack the
flexibility needed to govern the full complexity of AI.
One speaker introduced a helpful framework to understand the regulatory landscape, breaking
it down into four key areas: macro issues, sector-specific challenges, abuse of AI, and cultural
and human capital considerations. These areas illustrate the diverse concerns AI regulation
must address, highlighting that no single regulatory framework can comprehensively manage
every aspect of AI governance.
Macro Issues
AI’s potential to reshape employment, infrastructure, and data management raises broad
societal concerns, including its impact on national strategies and the singularity. These issues
affect all sectors and the economy as a whole.
Sector-Specific Challenges
Industries like financial services and healthcare face unique AI-related concerns, such as
fairness, data privacy, and preventing abuses. AI systems must be explainable, responsible, and
compliant with regulations specific to each sector.
Abuse of AI
The misuse of AI for cybercrime, fraud, and illegal activities is a critical concern. Effective
regulation requires cybersecurity frameworks to address malicious AI use, as traditional
regulations may not suffice.
Cultural and Human Capital Considerations
Society’s adaptation to AI includes education and upskilling, ensuring individuals are prepared
to work alongside AI systems and adapt culturally to new technologies.
One speaker argued against the idea of “future-proof” regulations, emphasising that AI’s
rapid innovation makes it impossible to create long-lasting rules. Instead, they advocated
for adaptable, responsive regulatory frameworks that manage risks without stifling
innovation. Overly rigid or prescriptive regulations could lead to instability, particularly if
they change unpredictably.
In summary, speakers agreed on the need for adaptable, flexible regulations that support
AI innovation while managing risks. The challenge is balancing technological growth with
regulatory protection.
The “Law of the Horse” is a metaphor used in legal studies, particularly in the context
of cyberlaw, that critiques the idea of creating a specialised legal discipline for rapidly
evolving technologies. This concept was famously discussed by Judge Frank H.
Easterbrook in a 1996 lecture at the University of Chicago, where he argued against the
idea of developing a distinct body of law for the internet, which he referred to as “cyberlaw”.
Easterbrook suggested that there is no more a “law of the horse” than there is a “law of
cyberspace.” He argued that just as legal issues concerning horses should be addressed by
general principles of law (like property, contract, or tort law), so should issues arising from
the internet. He believed that creating a specialised area of law for every new technology
or domain is unnecessary and inefficient, as it would fragment legal education and
practice without adding substantive value.
Source: Frank H. Easterbrook: Cyberspace and the Law of the Horse and Lawrence Lessig: The Law of the
Horse: What Cyberlaw Might Teach
PART 2
AI Governance Across
Regions: Key Objectives
and Approaches
Saudi Arabia
A speaker highlighted that Saudi Arabia views AI as a national priority, with significant focus
placed on AI education and infrastructure development. The Kingdom’s approach spans from
education at all levels to building AI infrastructure and ensuring data sovereignty. Significant
investments in local infrastructure, including data centers from companies like Google,
Microsoft, and Oracle, address regulatory challenges related to cloud usage and cross-border
data flows. Key initiatives, such as the Saudi Data and AI Authority’s (SDAIA) National AI Ethics
Checklist and partnerships with UNESCO, support responsible AI use.
United Kingdom
European Union
The EU’s AI Act emphasises consumer protection, transparency, and fairness, particularly in
financial services. It imposes strict requirements on AI applications, such as credit scoring,
ensuring they respect individuals’ rights. The regulation is detailed, especially for high-risk AI
systems in sectors like finance, where decisions can significantly affect individuals and are
subject to impact assessment requirements.
Singapore
The framework presented centred on four pillars of AI governance:
• Fairness – accuracy, bias, model agnostic, justice
• Ethics – beneficence, human-centered, user privacy
• Transparency – explainability, justifiability, reproducibility
• Accountability – regulatory compliance, trustworthiness
Japan
Common Objectives
Consumer Protection
One of the most universally shared goals across regions was consumer protection, especially
in areas where AI is used to make decisions in sectors like lending, hiring, and credit scoring.
A common theme among the speakers was ensuring fairness and transparency in AI-driven
decision-making. One speaker’s remark that “AI must assess the impact on fundamental rights”
underscores the need to safeguard consumers from biased or opaque algorithms. This focus is
especially strong in
the EU and UK, where strict regulations and outcomes-based frameworks aim to mitigate
risks to consumers.
Risk Management
Speakers agreed that AI models, which evolve over time, pose unique challenges for risk
management. The complexity lies in the fact that these models require constant revalidation
as they learn from new data inputs. This continuous evolution introduces additional layers of
oversight, making it difficult for regulators and financial institutions to ensure that the models
remain compliant and accurate.
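A concrete, commonly used trigger for this kind of revalidation is a distribution-drift statistic such as the population stability index (PSI). The sketch below is illustrative only: the synthetic score data and the 0.25 alert threshold are assumptions for the example, not figures from the roundtable.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between the score distribution a model was validated on
    ('expected') and the live distribution it now sees ('actual').
    A common rule of thumb treats PSI > 0.25 as significant drift."""
    # Decile edges of the validation-time distribution
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))[1:-1]
    e = np.bincount(np.digitize(expected, edges), minlength=bins) / len(expected)
    a = np.bincount(np.digitize(actual, edges), minlength=bins) / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)  # avoid log(0)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(0)
validated = rng.normal(600, 50, 10_000)  # scores seen at validation time
live = rng.normal(620, 60, 10_000)       # live scores after population shift

psi = population_stability_index(validated, live)
print(f"PSI = {psi:.3f} -> {'revalidate' if psi > 0.25 else 'stable'}")
```

In production such a check would run on a schedule against decision logs, with drift above the agreed threshold routing the model back into the validation workflow.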
Data security, particularly data sovereignty, is a key focus with regional variations. In Saudi
Arabia, data sovereignty is crucial due to concerns about cloud usage and localising data, with
efforts to build local AI infrastructure. Meanwhile, the EU emphasises data protection through
regulations like GDPR and the upcoming AI Act, ensuring careful handling of data in AI systems.
Key Differences
Regulatory Flexibility
Regions like the UK and Singapore favour flexible, principles-based approaches that adapt to
evolving AI technology, promoting innovation with oversight. In contrast, the EU and Japan
have stricter, more prescriptive regulations, especially for high-risk AI systems, offering greater
protection in sectors like finance where AI’s impact is significant.
Sectoral Focus
Debate arose over the focus of AI regulation. Japan emphasises AI model risk management
specifically within the financial sector, ensuring accuracy and fairness. In contrast, Saudi
Arabia adopts a broader approach, focusing on cross-sectoral AI infrastructure development.
Opinions differed on whether a focused or broad approach would deliver better long-term
results for AI governance.
Sector-Specific AI Regulations
Financial Services
AI’s growing role in areas like fraud detection, credit scoring, and algorithmic trading brings
significant regulatory challenges. Several speakers highlighted concerns about model
risk management, particularly with continuously learning AI models. The rapid pace of AI
development complicates the validation and verification of these models, especially as they
evolve beyond traditional rule-based systems, as seen in credit scoring.
Another speaker emphasised that, much like the statistical modeling techniques financial
services have long relied on before AI, AI-driven creditworthiness assessments require strict
oversight to ensure fairness and transparency. The speaker highlighted the importance of
preventing discrimination or biased outcomes, particularly in lending practices.
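One simple fairness check of the kind the speaker alludes to is the disparate impact ratio, which compares approval rates across groups; the “four-fifths rule” commonly flags ratios below 0.8 for review. The lending decisions and group labels below are entirely hypothetical.

```python
import numpy as np

def disparate_impact_ratio(approved, group):
    """Approval-rate ratio: protected group (group == 1) versus
    reference group (group == 0). Values below ~0.8 are commonly
    flagged for further review under the 'four-fifths rule'."""
    approved, group = np.asarray(approved), np.asarray(group)
    return approved[group == 1].mean() / approved[group == 0].mean()

# Hypothetical lending decisions: 1 = approved, 0 = declined
approved = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 0])
group    = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

ratio = disparate_impact_ratio(approved, group)
print(f"disparate impact ratio = {ratio:.2f}")  # 0.33 / 0.67 = 0.50: flag for review
```

In practice a metric like this would be computed on live decision logs at regular intervals, together with significance testing, rather than on a single small batch.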
Data Privacy
The issue of data privacy sparked considerable debate among the speakers, with regional
differences coming to the forefront. In the EU, laws like GDPR heavily influence how AI
systems operate, enforcing strict data protection, especially across borders. While many
praised GDPR’s robust protections, some voiced concerns that its stringent rules could
hinder innovation by limiting data usage.
PART 3
Regulatory Objectives and
Emerging Challenges
During the roundtable discussion, speakers emphasised the core regulatory objectives
driving AI governance in financial services. While these objectives are well-established,
many speakers agreed that they need to be adapted to the unique complexities introduced
by AI-driven systems.
AI plays a key role in detecting suspicious activities by identifying patterns in vast datasets.
While AI is powerful, speakers acknowledged its limitations in addressing emerging threats.
Ensuring AI systems adapt to new forms of financial crime while maintaining accuracy
remains a challenge.
Consumer Protection
Market Integrity
AI in algorithmic trading presents both opportunities and risks. While it can make markets
more efficient, it also increases the potential for manipulation. One speaker emphasised
the need for robust regulatory frameworks to monitor AI systems and prevent
market destabilisation.
Financial Stability
Speakers emphasised the risk of AI amplifying societal biases, especially in areas like lending,
hiring, and insurance. There was consensus on the need for strict regulatory safeguards to
prevent biased algorithms from perpetuating inequalities. One speaker stressed that AI
systems must have proper checks to avoid discriminatory practices.
Explainability
The oversight of continuously learning AI models, which adapt over time, was highlighted as
a significant challenge. Unlike static models, these systems evolve, making them harder to
validate and audit. A speaker noted the difficulty for regulators in ensuring these AI models
remain compliant with standards while fostering innovation.
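One practical aid to auditing a model that keeps changing, sketched below under illustrative assumptions (the class name, parameters, and decisions are invented for the example), is to fingerprint each model version and record it alongside every decision, so that an outcome can later be traced to the exact parameters that produced it even after retraining.

```python
import hashlib
import json
import datetime

class ModelAuditLog:
    """Minimal append-only log tying each decision to a model-version
    fingerprint, so a later audit can reconstruct which parameters
    produced which outcome even after the model has been retrained."""

    def __init__(self):
        self.entries = []

    @staticmethod
    def fingerprint(params: dict) -> str:
        # Deterministic hash of the model's parameters
        blob = json.dumps(params, sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()[:12]

    def record(self, params, features, decision):
        self.entries.append({
            "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "model": self.fingerprint(params),
            "features": features,
            "decision": decision,
        })

log = ModelAuditLog()
v1 = {"weights": [0.40, 0.60], "threshold": 0.50}
log.record(v1, {"income": 52000, "dti": 0.31}, "approved")

v2 = {"weights": [0.35, 0.65], "threshold": 0.55}  # model retrained
log.record(v2, {"income": 52000, "dti": 0.31}, "declined")

# Same applicant, different outcome: the log shows the model changed.
print(log.entries[0]["model"] != log.entries[1]["model"])
```

A real deployment would also hash training-data snapshots and store the log in tamper-evident storage, but the principle, versioned decisions rather than a single mutable model, is the same.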
The issue of cross-border data flows and data sovereignty was a recurring theme in the
discussion. AI systems often rely on large datasets, which can be sourced from multiple
countries, each with its own privacy and data protection laws. Differences in regulations, such
as the EU’s GDPR and Saudi Arabia’s data sovereignty laws, create barriers to the development
of unified AI frameworks. Speakers acknowledged that addressing these disparities is critical
for fostering international cooperation and enabling the safe and secure operation of AI
systems across borders.
Future-Proofing AI Regulations
One of the central points of agreement was that AI regulations must be flexible and
adaptable. The speakers acknowledged that the AI of today will not be the AI of tomorrow,
with advancements like generative AI and quantum computing on the horizon. While some
advocated for a reactive approach, adjusting regulations alongside advancements, others
stressed that regulation should not hinder innovation. The key takeaway was the importance
of creating adaptable guidelines that balance innovation with the risks posed by emerging
technologies like quantum computing, ensuring regulations remain relevant as AI evolves.
PART 4
Balancing Ethical Concerns
with Innovation in AI
Bias and Fairness
Speakers agreed that AI systems risk reinforcing societal biases, particularly in financial areas
like lending and hiring. While AI can improve efficiency, it may perpetuate discrimination
if not properly regulated. Safeguards are essential to ensure fairness and prevent biased
algorithms from harming marginalised individuals.
Accountability
Assigning responsibility for harmful AI decisions was also debated, as AI’s complexity
complicates accountability. Financial institutions must establish clear accountability lines
to address errors or biases. Regulating accountability for evolving AI models is a significant
challenge that must be addressed to protect consumers and maintain ethical standards.
Regulatory Sandboxes
Building on the earlier discussion of AI-specific regulatory sandboxes, speakers
further emphasised their role as key tools for enabling AI innovation while maintaining
oversight. These controlled environments allow AI technologies to be tested and developed
under regulatory supervision, balancing innovation with consumer protection and financial
stability. Sandboxes provide opportunities for regulators and developers to learn and adapt,
ensuring regulations stay relevant. However, there was caution that sandboxes should not be
overly restrictive, as this could hinder innovation.
Public-Private Partnerships
As AI technologies continue to evolve, new ethical challenges emerge that require ongoing
attention and adaptation by regulators and industry stakeholders.
Bias Monitoring
The complexity of monitoring AI models for bias, especially as they continuously learn, was
a key concern. While AI can streamline decision-making, it risks reinforcing societal biases,
particularly in areas like lending and credit scoring. Speakers emphasised the need for
ongoing monitoring and retraining to ensure fairness, as failure to address biases could lead to
discriminatory practices in financial services.
Ethical Enforcement
Ensuring ethical compliance was another challenge. While regulations exist, speakers stressed
the importance of enforceable guidelines. Developing effective mechanisms for auditing AI
systems and holding parties accountable for ethical breaches is crucial, especially given the
autonomous nature of AI. Robust frameworks for audits and oversight are needed to ensure
compliance with ethical standards.
The discussion also underscored the importance of building a workforce capable of navigating
the ethical and regulatory challenges posed by AI technologies.
Upskilling Regulators
The discussion stressed the need for regulators to gain AI-specific expertise, particularly in areas
like machine learning and continuous learning models. Without this knowledge, regulators
may struggle to oversee rapidly evolving AI systems, risking unchecked use in financial sectors.
One of the speakers highlighted the challenge of ensuring that regulators not only
understand the technical aspects of AI but are also equipped to apply this knowledge in
real-world oversight scenarios. Several speakers emphasised the importance of training and
development programmes that allow regulators to keep up with the evolving nature of AI
technologies. It was broadly agreed that ongoing education is critical for effective governance,
with a focus on bridging the knowledge gap between technologists and policymakers.
Collaboration between industry, academia, and regulators is crucial for building an AI-ready
workforce. Initiatives like Saudi Arabia’s AI education programmes serve as models for training
professionals to develop and regulate AI systems responsibly, aligning with ethical standards
and regulatory requirements.
PART 5
Future of AI Governance –
Preparing for Advanced AI
The roundtable continued with Ray Kurzweil’s concept of the Law of Accelerating Returns,
as presented in his book The Singularity Is Near: When Humans Transcend Biology. Kurzweil
predicts that the 21st century will see 1,000 times the progress achieved in the 20th century
due to the exponential growth of technology.
The discussion began with differing views on whether superintelligence could emerge within
the speakers’ lifetimes. While many felt artificial general intelligence (AGI) was becoming
more plausible, driven by advances in AI like large language models (LLMs), uncertainty
remained about future developments. One speaker suggested that traditional regulation
might be inadequate for AGI, proposing a constitution for AI as a legal framework to guide
governance. However, skepticism was voiced about current AI capabilities, with concerns
about the influence of large corporations over AI’s development, making effective governance
challenging despite regulatory efforts.
The discussion referenced the progression of AI capability:
• Artificial Narrow Intelligence (where we are today) – AI designed to handle simple, single-task activities with high efficiency; the source of current breakthroughs.
• Artificial General Intelligence – AI capable of performing multiple tasks at a human level.
• Artificial Super Intelligence – AI representing intelligence that surpasses human capabilities.
As AI becomes more advanced and integrated into critical sectors such as finance, its ethical
implications become increasingly pressing. The roundtable also highlighted several long-term
ethical risks that will need to be addressed as AI technology continues to evolve.
Autonomy and Accountability
The discussion emphasised the growing autonomy of AI systems and the need for human
oversight, especially in sectors like finance and healthcare. As AI takes on more decision-
making, speakers highlighted the increasing risk of ethical decisions being left to machines.
The need for clear accountability frameworks was stressed, ensuring humans remain
responsible for AI-driven decisions. Without such frameworks, assigning liability for errors or
unethical outcomes from autonomous AI will be difficult.
“The number of machine-to-machine financial transactions will increase significantly if we have something like AGI and AI.”
Speakers emphasised the critical need for international cooperation to establish consistent
ethical standards for AI governance. While much of the current regulatory activity occurs at the
national government and regulator level, there is significant global momentum to harmonise
these efforts. Initiatives like the Bletchley Declaration, the World Bank’s AI governance
programme, and standards from organisations such as the National Institute of Standards
and Technology (NIST), with its AI 100-5, A Plan for Global Engagement on AI Standards, and
the International Organisation for Standardisation (ISO), through its AI management system
standard ISO/IEC 42001, are shaping the global dialogue on AI ethics. These global guidelines
and initiatives are essential for promoting fairness, transparency, and accountability, particularly
in industries like financial services, where AI’s impact crosses borders.
While global standards often align with national efforts, there can be discrepancies where
international guidelines do not immediately match specific national views or priorities. However,
initiatives such as the G7 Hiroshima Process and the Transatlantic Trade and Technology Council
underscore the importance of cross-border collaboration to develop a unified framework. These
collective efforts are crucial for preventing regulatory arbitrage, ensuring responsible innovation,
and fostering a globally consistent approach to the ethical use of AI.
Scaling AI Infrastructure
As AI systems advance, scaling infrastructure, particularly in sectors like finance and healthcare,
is crucial. Speakers agreed that the growing complexity of AI models requires
significant investments in computational resources, data storage, and network capabilities.
Smaller markets, like Saudi Arabia, face unique challenges in keeping pace with global
developments. Regulators will need the capacity to oversee these complex systems, with some
speakers raising concerns about local capabilities compared to larger markets.
Concentration of Power
The growing reliance on a few major tech companies for AI services, such as cloud computing,
was another key concern. This concentration of power could pose systemic risks, especially in
smaller markets where dependence on external providers increases vulnerability. One speaker
noted, “Big tech companies offer AI capabilities that SMEs can’t access,” highlighting the
financial stability risks if these providers face disruptions. To mitigate this, some emphasised
the need to build local AI capabilities and reduce reliance on multinational firms.
A significant concern was the potential for AI, particularly generative AI, to shape opinions in
sectors like finance and insurance, driven by monopolistic organisations. This manipulation of
decision-making processes could become a major challenge as AI’s influence grows.
Speakers discussed the potential for Web 3.0 technologies to decentralise financial systems,
adding complexity and risk. However, the rise of data nationalism – countries enforcing stricter
data boundaries – could counterbalance these changes. Privacy-preserving technologies could
allow secure cross-border data sharing, though this is still developing. Concerns were also
raised about the significant energy consumption of AI technologies, with data centers expected
to account for two-thirds of energy resources by 2030 if unchecked.
Conclusion
Key themes that emerged throughout the discussions underline the importance of cross-sector
collaboration, continuous learning, and diversity in building the workforce of the future. The rise
of hybrid skills, where technical proficiency is complemented by soft skills like critical thinking,
adaptability, and leadership, is paramount to ensuring employees are equipped to navigate
complex challenges in an AI-driven world.
In conclusion, building the workforce of tomorrow requires a proactive and holistic approach.
Organisations that invest in reskilling and upskilling, embrace diversity, and cultivate strong
leadership will not only thrive in the evolving digital landscape but will also ensure that their
workforce remains agile, resilient, and prepared for the future. This report lays the foundation
for ongoing dialogue and action, urging leaders and stakeholders to prioritise workforce
transformation as a strategic imperative.
Appendix
Researchers
Centre for Finance,
Technology and
Entrepreneurship
Founded in 2017 in London, CFTE is a global platform for education in Fintech and the future of Financial Services. More than 100,000 professionals from 100+ countries have participated in CFTE programmes to accelerate their careers in Fintech and new finance. In addition to London, CFTE is present in Singapore (accredited by the Institute of Banking and Finance), Abu Dhabi (Abu Dhabi Global Market Academy), Hong Kong (Cyberport), Malaysia (Asian Banking School), Luxembourg (Luxembourg Academy of Digital Finance with LHOFT) and Budapest (Budapest Institute of Banking).

CFTE’s objective is to equip professionals and students with the skills to thrive in the new world of finance. This includes online courses and specialisations, leadership training and hands-on extrapreneurship experiences in topics such as Fintech, Open Banking, Digital Payments and Artificial Intelligence.

CFTE courses are designed with the principle of For the Industry, By the Industry. Our courses are taught by senior leaders from fast-growing Fintech companies such as Revolut, Plaid, and Starling Bank, innovative financial institutions such as Citi, DBS and Ping An, tech companies such as Google, IBM and Uber, and regulators from MAS, ECB and MNB.

In total, more than 200 CFTE experts provide a global view of what’s really happening in this new world of finance.

“In a tech world, we bet on people” is CFTE’s motto. Our global community is the core of CFTE. Thanks to an innovative and open mindset, CFTE alumni progress in their careers and help others do the same, with notable alumni leading transformation in their organisations. They also attend events and share advice, tips and job opportunities. CFTE alumni have also made an impact through the world’s largest Global Fintech Internship by mentoring over 1,000 students from all over the world.

CFTE believes that the new world of finance will be inclusive, diverse, innovative and will have a positive impact on society and people. This starts with people having the right knowledge and mindset so that no one is left behind. Whether you want to learn, contribute or more generally be part of the new world of Financial Services, we are looking forward to welcoming you.
Contact
Research team: [email protected]
Press: [email protected]

Website
Courses: courses.cfte.education
Articles: blog.cfte.education
About
Fintech Saudi
Fintech Saudi is an initiative launched by the Saudi Central Bank
(SAMA) in collaboration with the Capital Markets Authority (CMA) under the Financial Sector
Development Programme to support the development of the Fintech Industry in Saudi Arabia.
Fintech Saudi’s ambition is to transform Saudi Arabia into an innovative fintech hub with a
thriving and responsible fintech ecosystem.
Fintech Saudi seeks to achieve this by supporting the development of the infrastructure
required for the growth of the fintech industry, building capabilities and talent required by
fintech companies and supporting fintech entrepreneurs at every stage of their journey.
More from CFTE
Piotr Kurzepa
Head of Middle East Office
[email protected]