L8 IT/AI Governance
Sem 1, AY2024/2025
Yang Lu
Learning Objectives
▪ Discuss IT/AI governance frameworks.
1
Governance
▪ Governance is
▪ “The set of responsibilities and practices exercised by the board and executive
management with the goal of providing strategic direction, ensuring that objectives
are achieved, ascertaining that risks are managed appropriately, and verifying that
the enterprise’s resources are used responsibly”
▪ GRC
▪ Governance, Risk Management, and Compliance
2
MAS Technology
Governance
Requirement
▪ 3 Technology Risk Governance
and Oversight
3
Discussion
Governance
4
Discussion: SingHealth Data
Breach
▪ Enhancements to Governance and Organisational Structures
“At the Ministry, the MOH Chief Information Security Officer (CISO) is currently also the
Director of Cyber Security Governance at IHiS. We will separate these roles. The MOH CISO
will be supported by a dedicated office in MOH and report to the Permanent Secretary. The
MOH CISO office will be the cybersecurity sector lead for the healthcare sector. It will
coordinate efforts to protect Critical Information Infrastructure in the healthcare sector, and
ensure that the sector fulfils its regulatory obligations under the Cybersecurity Act. For its
part, IHiS will have its own separate Director of Cyber Security Governance. ”
Source: https://www.moh.gov.sg/news-highlights/details/ministerial-statement-on-the-committee-of-inquiry-into-the-cyber-attack-on-singhealth-s-
it-system
5
AI Governance
▪ AI governance is the ability to direct, manage and monitor the AI activities of
an organization.
▪ Objective
▪ Deliver transparent and ethical AI to establish accountability, responsibility and
oversight
6
SG AI Governance
▪ Singapore AI governance framework
▪ IMDA and PDPC
▪ 11 AI ethical principles
7
SG AI
Governance
(cont.)
8
AI Governance
▪ Explainable
▪ Explain what the AI system is doing
▪ Transparent
▪ Right to be informed
▪ Appropriate info is provided to individuals impacted by AI systems
▪ Fairness:
▪ a. Ensure that algorithmic decisions do not create discriminatory or unjust impacts
across different demographic lines (e.g., race, sex, etc.).
▪ b. To develop and include monitoring and accounting mechanisms to avoid
unintentional discrimination when implementing decision-making systems.
▪ c. To consult a diversity of voices and demographics when developing systems,
applications and algorithms.
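The monitoring mechanisms in point (b) can be made concrete with a simple check. The sketch below is a hypothetical illustration (the group labels and the 0.8 "four-fifths rule" threshold are assumptions, not part of any framework discussed here) of how an organization might flag potential disparate impact across demographic groups.

```python
# Hypothetical sketch: monitoring demographic parity of a model's decisions.
# The 0.8 threshold (the "four-fifths rule") is an illustrative assumption.

def approval_rate(decisions):
    """Fraction of positive (approve) decisions."""
    return sum(decisions) / len(decisions)

def demographic_parity_ratio(decisions_a, decisions_b):
    """Ratio of approval rates between two demographic groups (min/max)."""
    ra, rb = approval_rate(decisions_a), approval_rate(decisions_b)
    lo, hi = min(ra, rb), max(ra, rb)
    return lo / hi if hi > 0 else 1.0

# 1 = approved, 0 = rejected, split by demographic group
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = demographic_parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Warning: review for unintentional discrimination")
```

A real deployment would track such ratios continuously and route low scores to a governance review, rather than relying on a one-off check.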
9
Discussion
▪ JD.com
▪ AI bot handling debts collection
▪ https://yanxi.jd.com/
10
AI Governance (cont.)
▪ Dimensions:
1. Internal Governance Structures and Measures
▪ Clear roles and responsibilities for the ethical deployment of AI
▪ E.g., a governance board including the CTO, CDO, CPO, etc.
▪ E.g.,
▪ MasterCard AI governance council: chaired by its Executive Vice President of the AI
Center of Excellence; members include the Chief Data Officer, Chief Privacy Officer,
CISO, data scientists and representatives from business teams.
▪ Risk management and internal controls
▪ E.g., formal risk impact assessment towards AI projects.
▪ E.g.,
▪ MasterCard AI project risk scoring: initial risk scoring to determine the risk of the
proposed AI activity, which includes an evaluation of multiple factors including
alignment with corporate initiatives, the data types and sources utilized, and the
impact on individuals from AI decisions.
▪ Periodic risk impact assessment reviews
11
AI Governance (cont.)
▪ Dimensions:
2. Determine the level of human involvement in AI-augmented decision making
▪ Human-in-the-loop
▪ The human retains full control, with the AI only
providing recommendations or input.
▪ Decisions cannot be exercised without affirmative
actions by the human
▪ Human-out-of-the-loop
▪ There is no human oversight over the execution of
decisions.
▪ The AI system has full control without the option of
human override.
▪ Human-over/on-the loop
▪ Human oversight is involved to the extent that the
human is in a monitoring or supervisory role, with
the ability to take over control when the AI model
encounters unexpected or undesirable events (such
as model failure).
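The three oversight modes above can be sketched as a small decision workflow. This is an illustrative toy (the function and enum names are hypothetical, not from the framework): what changes between modes is whose action determines the decision that actually takes effect.

```python
# Illustrative sketch of the three human-oversight modes.
# Names are hypothetical; the point is who controls the executed decision.
from enum import Enum

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = "in"        # human must approve every decision
    HUMAN_OVER_THE_LOOP = "over"    # AI acts; human monitors and can take over
    HUMAN_OUT_OF_THE_LOOP = "out"   # AI acts autonomously, no override

def execute(mode, ai_decision, human_approves=None, human_overrides=None):
    """Return the decision that actually takes effect under each mode."""
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # No affirmative human action -> no decision is executed.
        return ai_decision if human_approves else None
    if mode is OversightMode.HUMAN_OVER_THE_LOOP:
        # AI decision stands unless the supervising human takes over.
        return human_overrides if human_overrides is not None else ai_decision
    return ai_decision  # out-of-the-loop: full AI control

print(execute(OversightMode.HUMAN_IN_THE_LOOP, "approve_loan"))       # None
print(execute(OversightMode.HUMAN_OVER_THE_LOOP, "approve_loan",
              human_overrides="reject_loan"))                         # reject_loan
print(execute(OversightMode.HUMAN_OUT_OF_THE_LOOP, "approve_loan"))   # approve_loan
```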
12
Discussion
▪ Which of the following AI-augmented decision-making applications could use a
human-out-of-the-loop design? (Please select all the options that apply)
a. AI-based disease diagnosis in healthcare
b. AI-based content recommendation systems in Netflix
c. AI-based high speed financial trading
d. AI-based Autonomous car driving
13
Discussion: Binance Incident
Source:
https://www.straitstimes.com/business/ban
king/bitcoin-briefly-crashes-87-on-binances-
us-exchange-due-to-algorithm-bug
14
Discussion: New York Stock
Exchange Incident
15
AI Governance (cont.)
▪ Automation Paradox:
▪ The more efficient the automated
system, the more crucial the human
contribution of the operators.
▪ Humans are less involved, but their
involvement becomes more critical.
16
AI Governance (cont.)
▪ Dimensions:
3. Operations Management
▪ Data preparation
▪ Good data accountability practices
a. Understanding the lineage of data
▪ knowing where the data originally came from, how it was collected, curated and moved
within the organization, and how its accuracy is maintained over time
b. Ensuring data quality
▪ E.g., data accuracy/completeness/veracity/recency/relevance/integrity/usability
c. Minimizing inherent bias
▪ Selection bias: the data are not fully representative of the actual data or environment
the model may receive or function in.
▪ Measurement bias: when the data collection device/method causes the data to be
systematically skewed in a particular direction.
d. Different datasets for training, testing, and validation
e. Periodic reviewing and updating of dataset
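Practice (d) above can be sketched in a few lines. This is a minimal illustration (the 70/15/15 proportions and the fixed seed are assumptions for the example): a single shuffle followed by disjoint slices guarantees no record leaks from training into testing or validation.

```python
# Minimal sketch of practice (d): disjoint training/validation/test sets.
# Split proportions and seed are illustrative assumptions.
import random

def split_dataset(records, train=0.7, val=0.15, seed=42):
    """Shuffle once, then slice into disjoint train/validation/test sets."""
    rng = random.Random(seed)          # fixed seed -> repeatable split
    data = list(records)
    rng.shuffle(data)
    n = len(data)
    n_train = int(n * train)
    n_val = int(n * val)
    return (data[:n_train],
            data[n_train:n_train + n_val],
            data[n_train + n_val:])

train_set, val_set, test_set = split_dataset(range(100))
print(len(train_set), len(val_set), len(test_set))  # 70 15 15
# Disjointness check: no record appears in more than one split
assert not (set(train_set) & set(test_set))
```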
17
Discussion
Source: https://www.theguardian.com/us-
news/2023/feb/08/us-immigration-cbp-one-
app-facial-recognition-bias
18
Discussion
Source:
https://www.science.org/doi/10.1126/scienc
e.aax2342
19
Discussion
Source: https://www.reuters.com/article/us-amazon-com-jobs-
automation-insight-idUSKCN1MK08G
20
AI Governance (cont.)
▪ Dimensions:
3. Operations Management
▪ Algorithms
▪ Explainability
▪ Explain how deployed AI models’ algorithms function and/or how the decision-making
process incorporates model predictions.
▪ Repeatability/Reproducibility
▪ The ability to consistently perform an action or make a decision, given the same scenario
▪ Robustness
▪ The ability of a computer system to cope with errors during execution and erroneous input
▪ Regular tuning
▪ Refresh models based on updated training datasets that incorporate new input data.
▪ Traceability
▪ Auditability
▪ Model choice
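Repeatability/reproducibility, listed above, has a direct operational meaning: given the same scenario, the system must make the same decision. The toy model below is a hypothetical stand-in (not any real deployed algorithm) showing the usual mechanism, which is pinning every source of randomness to an explicit, logged seed.

```python
# Hedged sketch of repeatability: same scenario + same seed => same decision.
# The scoring model is a toy stand-in, not a real deployed algorithm.
import random

def score_applicant(features, seed=0):
    """Toy model whose only randomness comes from an explicit, logged seed."""
    rng = random.Random(seed)
    noise = rng.uniform(-0.05, 0.05)   # e.g., tie-breaking randomness
    base = sum(features) / len(features)
    return round(base + noise, 4)

features = [0.8, 0.6, 0.9]
run1 = score_applicant(features, seed=123)
run2 = score_applicant(features, seed=123)
assert run1 == run2  # repeatable: rerunning the scenario reproduces the decision
```

Logging the seed alongside each decision also supports the traceability and auditability properties on the same slide.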
21
Discussion
▪ An example of a white-box
adversarial example designed to
generate physical alterations for
physical-world objects. The
adversarial alteration (b), which
is designed to mimic the
appearance of graffiti (a), tricks
an image classifier into not
seeing a stop sign.
▪ Source: Kevin Eykholt, Ivan Evtimov, Earlence Fernandes,
Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash,
Tadayoshi Kohno, Dawn Song, "Robust Physical-World
Attacks on Deep Learning Visual Classification,” 2018
IEEE/CVF Conference on Computer Vision and Pattern
Recognition, Salt Lake City, UT, USA, June 18-23, 2018,
pp. 1625-1634, https://arxiv.org/abs/1707.08945.
22
AI Governance (cont.)
▪ Dimensions:
4. Stakeholder interaction and communication
▪ General disclosure
▪ Organizations are encouraged to provide general information on whether AI is used in their
products and/or services, how the AI decision may affect an individual consumer, and
whether the decision is reversible.
▪ Option to opt-out
▪ Relevant considerations include:
▪ Degree of risk/harm to the individuals;
▪ Reversibility of the decision made;
▪ Availability of alternative decision-making mechanisms;
▪ Cost or trade-offs of alternative mechanisms;
▪ Complexity and inefficiency of maintaining parallel systems;
▪ Technical feasibility
▪ Communication channels
▪ Acceptable use policies (AUPs)
▪ etc
23
Discussion
▪ HSBC: AI Governance in All Facets of Loan Applications
▪ To enable optimal benefits to its customers, while keeping them protected from
potential harm that could come with the unmanaged use of emerging technologies,
HSBC put in place measures to govern the use of AI in its businesses. E.g.,
▪ Established an internal Global Model Oversight Committee (GMOC).
▪ Chaired by its Chief Risk Officer, the committee comprises representatives from relevant
departments with defined roles in the development of accountable AI processes for HSBC
including:
▪ The Chief Data Office;
▪ Head of various sub-committees that represent different regions (e.g., US and Asia),
businesses (e.g., Retail Banking and Wealth Management) and functions (e.g., Risk
Management and Finance);
▪ The Model Risk Management (MRM) team, which consists of a Model Risk Governance
sub-team that sets the policy and standards and an Independent Model Review sub-
team that is responsible for validating the AI models before deployment.
24
Discussion
▪ HSBC: AI Governance in All Facets of Loan Applications
▪ Developed the Model Risk Management (MRM) framework
▪ Roles and responsibilities: model owner, model developer, model sponsor etc.
▪ Mandatory training
▪ Keeping to principles
▪ Vendor management controls implementations
▪ Complying with rigorous standards
▪ Auditability
▪ Managing risks for loan applications
▪ Keeping customers in the loop
25
Discussion
▪ Standard Chartered Responsible Artificial Intelligence (RAI)
▪ Inhouse Framework Responsible for Intelligence Data and Algorithm Yield (FRIDAY)
▪ Own RAI council
▪ 5 Main engines
▪ Explainable AI
▪ Data Suitability
▪ Robustness
▪ Model monitoring and stability
▪ Model review and validation
26
Discussion
▪ Regarding the new credit card application process at HSBC, how should humans be involved?
▪ Human out of the loop
▪ Human over the loop
▪ Human in the loop
▪ None of the above
27
AI Governance (cont.)
▪ Monetary Authority of Singapore
▪ Principles to Promote Fairness, Ethics, Accountability, and Transparency (FEAT) in the Use of
Artificial Intelligence and Data Analytics (AIDA) in Singapore’s Financial Sector
▪ Fairness
▪ Justifiability
▪ Accuracy and bias
▪ Ethics
▪ Accountability
▪ Internal accountability
▪ Concerned with the AIDA firm’s internal governance
▪ External accountability
▪ concerned with the AIDA firm’s responsibility to data subjects.
▪ Data subjects are provided with channels to enquire about, submit appeals for and request
reviews of AIDA-driven decisions that affect them.
▪ Transparency
▪ Data subjects are provided, upon request, clear explanations on what data is used to make AIDA-
driven decisions about the data subject, how the data affects the decision, and the consequences that
AIDA-driven decisions may have on them.
28
AI Governance (cont.)
▪ A.I. Verify – An AI Governance Testing Framework and Toolkit @
Singapore
▪ The world’s first AI Governance Testing Framework and Toolkit for companies that
wish to demonstrate responsible AI in an objective and verifiable manner
▪ 25 May, 2022
▪ Aims to promote transparency between companies and their stakeholders.
▪ Currently a Minimum Viable Product (MVP)
▪ It translates AI ethical principles into tangible results.
▪ Starts with 8 principles
▪ It helps AI system-owners and/or developers self-assess the performance of their AI
solutions through a mix of technical/statistical tests and process checks.
▪ Verifies claimed performance of AI systems
▪ Works with supervised AI models
▪ Does not define ethical standards (i.e., no pass or fail)
29
AI Governance (cont.)
▪ A.I. Verify
▪ Technical/process test
▪ Explainability
▪ Robustness
▪ Fairness
▪ Process checks
▪ Transparency
▪ Repeatability/Reproducibility
▪ Safety
▪ Accountability
▪ Human agency and oversight
30
AI Governance (cont.)
▪ A.I. Verify
▪ Scope and limitations of MVP
a. Supports supervised learning AI models on tabular and image data for:
▪ Binary classification
▪ Multiclass classification
▪ Regression
b. Cannot test Generative AI/LLMs
c. Does not define AI ethical standards
d. Does not guarantee that any AI system tested will be free from risks or biases or is
completely safe
e. Supports small-to-medium scale models
31
Model AI Governance
Framework for
Generative AI
▪ AI Verify
▪ 16 Jan 2024
32
EU AI Governance
▪ The EU AI Act
▪ The AI Act is a European law on artificial intelligence (AI) – the first comprehensive law on
AI by a major regulator anywhere.
▪ First proposed in April 2021
▪ Formally adopted by the European Parliament on 13 Mar, 2024
▪ Passed by a large majority, with 523 votes in favor and 46 against
▪ The world's first horizontal and standalone law governing AI
▪ https://artificialintelligenceact.eu/ai-act-explorer/
33
EU AI Governance (cont.)
▪ The EU AI Act
▪ Three risk categories
1. AI applications and systems that create an unacceptable risk, are banned. They include:
▪ Cognitive behavioural manipulation of people or specific vulnerable groups: e.g., voice-
activated toys that encourage dangerous behaviour in children
▪ Social scoring: classifying people based on behaviour, socio-economic status or personal
characteristics
▪ Real-time and remote biometric identification systems, such as facial recognition
34
EU AI Governance (cont.)
▪ The EU AI Act
▪ Three risk categories
2. High-risk applications are subject to specific legal requirements.
▪ AI systems that negatively affect safety or fundamental rights will be considered high risk and will be
divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This includes
toys, aviation, cars, medical devices and lifts.
2) AI systems falling into eight specific areas that will have to be registered in an EU database:
▪ Biometric identification and categorisation of natural persons
▪ Management and operation of critical infrastructure
▪ Education and vocational training
▪ Employment, worker management and access to self-employment
▪ Access to and enjoyment of essential private services and public services and benefits
▪ Law enforcement
▪ Migration, asylum and border control management
▪ Assistance in legal interpretation and application of the law.
▪ All high-risk AI systems will be assessed before being put on the market and also throughout their
lifecycle.
3. Applications not explicitly banned or listed as high-risk are largely left unregulated.
35
AI Governance (cont.)
36
Discussion
▪ AI at LinkedIn
▪ https://engineering.linkedin.co
m/blog/2018/10/an-
introduction-to-ai-at-linkedin
37
Discussion
▪ https://www.technologyreview.c
om/2021/06/23/1026825/linke
din-ai-bias-ziprecruiter-monster-
artificial-intelligence/
38
US AI Governance
▪ US
▪ The National AI Initiative Act of 2020 (NAIIA)
▪ Effective 1 Jan, 2021
▪ Providing for a coordinated program across the entire Federal government to accelerate AI
research and application for the Nation’s economic prosperity and national security.
▪ Purpose
▪ Advance U.S. leadership in AI R&D
▪ Leading world in development and use of trustworthy AI systems in public and private
sectors
▪ Preparing present and future U.S. workforce for integration of AI systems across all
sectors of economy and society;
▪ Coordinating AI research, development, and demonstration activities among civilian
agencies, Department of Defense, and Intelligence Community to ensure that each
informs work of the others.
39
AI Governance (cont.)
▪ US
▪ NIST AI risk management framework (AI RMF 1.0)
▪ 26 Jan 2023
▪ https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf
▪ Principles
▪ Valid and Reliable
▪ Safe
▪ Secure and Resilient
▪ Accountable and Transparent
▪ Explainable and Interpretable
▪ Privacy-Enhanced
▪ Fair – with Harmful Bias Managed
40
California New AI Safety Bill
▪ The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,
or SB (Senate Bill) 1047
▪ A 2024 California bill intended to "mitigate the risk of catastrophic harms from AI
models so advanced that they are not yet known to exist”
▪ https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
▪ Key highlights:
1. Expansive definitions of “covered models” and “covered model derivatives” are likely to
capture many frontier AI models and subsequent modifications. SB 1047 broadly applies to
“covered models,” which are AI models that either:
▪ Cost over $100 million to develop and are trained using computing power “greater than
10^26 integer or floating-point operations” (FLOPs); or
▪ Are based on covered models and fine-tuned at a cost of over $10 million, using
computing power greater than three times 10^25 integer or floating-point operations.
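The two thresholds above can be expressed as a simple check. This is a sketch of the slide's summary only (the helper names and field choices are assumptions for illustration; the bill's actual legal tests are more nuanced than two numeric comparisons).

```python
# Sketch of the SB 1047 "covered model" thresholds summarized above.
# Helper names are illustrative; the bill's legal tests are more nuanced.

COST_THRESHOLD = 100_000_000        # over $100 million to develop
FLOP_THRESHOLD = 1e26               # greater than 10^26 operations
DERIV_COST_THRESHOLD = 10_000_000   # fine-tuned at a cost of over $10 million
DERIV_FLOP_THRESHOLD = 3e25         # greater than 3 x 10^25 operations

def is_covered_model(cost_usd, flops):
    """Frontier model test: both the cost and compute bars must be cleared."""
    return cost_usd > COST_THRESHOLD and flops > FLOP_THRESHOLD

def is_covered_derivative(finetune_cost_usd, finetune_flops):
    """Derivative test: fine-tuning cost and compute bars."""
    return (finetune_cost_usd > DERIV_COST_THRESHOLD
            and finetune_flops > DERIV_FLOP_THRESHOLD)

print(is_covered_model(150e6, 2e26))        # True
print(is_covered_model(150e6, 5e25))        # False: under the compute bar
print(is_covered_derivative(12e6, 4e25))    # True
```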
41
California New AI Safety Bill (cont.)
▪ The Safe and Secure Innovation for Frontier Artificial Intelligence Models Act,
or SB 1047
▪ Key highlights:
2. Applies only to companies that develop or provide compute power to train covered models or
covered model derivatives, not to companies that merely use covered models.
3. Before training a covered model, developers are required to implement technical and
organizational controls designed to prevent covered models from causing “critical harms.”
▪ These critical harms include creating or using certain weapons of mass destruction to
cause mass casualties; causing mass casualties or at least $500 million in damages by
conducting cyberattacks on critical infrastructure or acting with only limited human
oversight and causing death, bodily injury, or property damage in a manner that would
be a crime if committed by a human; and other comparable harms.
▪ Kill switch or “shutdown capabilities.”
▪ Cybersecurity protections.
▪ Safety protocols.
42
Discussion
▪ If you were the governor of the state of California, would you sign and approve this
drafted bill?
43
AI Governance (cont.)
▪ International standard
▪ ISO/IEC JTC
▪ A joint technical committee of the International Organization for Standardization and
International Electrotechnical Commission.
▪ ISO/IEC JTC 1/SC 42 Artificial intelligence
▪ https://www.iso.org/committee/6794475.html
44
Recommended Reading
▪ Blackman and Ammanath. (2022), Building Transparency into AI Projects.
Harvard Business Review, June.
▪ Singapore AI Governance Framework Second Edition
▪ AI Verify: Proposed Model Governance Framework for Generative AI
45
Next week
▪ L9 IT risks and security
46