
AIGP Study Outline

Domain 1: Understanding the Foundations of Artificial Intelligence (12/100)

I.A. Understand the Basic Elements of AI and ML (4/100)


● Common elements of AI/ML definitions: (1) technology:
○ AI = “a machine-based system that, for a given set of human-defined objectives, makes
predictions, recommendations or decisions influencing real or virtual environments. AI
systems use machine and human-based inputs to perceive real/virtual environments,
abstract such perceptions into models through automated analysis and use model
inference to formulate options for info/action.” (NIST)
○ ML = “branch of AI that focuses on using data and algorithms to enable AI to imitate the
way that humans learn, gradually improving its accuracy.” Enables computers to
iteratively learn, then make inferences, decisions and predictions based on input data.
Learns from training data to perform a task without being explicitly programmed to do so.
■ Adversarial ML = techniques that attempt to fool or degrade a model (e.g., via crafted
inputs), raising safety/security risk to the model; can be seen as an attack.
■ Bootstrap aggregating = ML ensemble method that trains multiple models on random
subsets of a dataset/corpus (sampled with replacement) and aggregates their
outputs. Enhances stability of a model.
■ Discriminative model = ML method that directly maps input features to class
labels for patterns that can help distinguish between classes. Used for text
classification tasks.
■ Entropy = measure of randomness in a dataset (higher entropy = more difficult to
predict); see the sketch at the end of this subsection.
■ Generalization = ability of model to apply what it’s learned to new / unseen data
■ Greedy algorithm = optimizes for immediate short term action rather than long
term optimal solution
■ Inference = ML process where a trained model makes predictions or decisions based
on input data
■ Input data = data provided to / acquired by learning algo or ML model for purpose
of producing output.
■ ML model = learned representation of patterns in data, created by applying an AI
algorithm to a training dataset. Can be used to make predictions on unseen data.
■ Overfitting = model becomes too specific to dataset and cannot generalize
● Underfit = model fails to capture complexity of training data. May result in
inaccuracy or poor predictive ability. Common reasons = too few
parameters, too high regularization rate, inappropriate/insufficient feature
set.
■ Parameter = internal variables that a model learns from training data; the values the
model adjusts during the training process to make predictions on new data.
■ Weight = how significant a parameter is considered
■ Testing data = subset of dataset used to evaluate trained model. Used to test
performance. (training data = used to train the model)
■ Transfer learning model = algo learns one task and then uses learned knowledge
to learn something else.
■ Turing test = a machine’s ability to exhibit behavior indistinguishable from a human’s
■ Generative adversarial network = deep learning architecture where two models
compete against each other to be more accurate in their predictions
○ Knowledge-based systems = form of AI designed to capture human expert knowledge to
support decision-making.
■ Expert system mimics human expert decision making by drawing inferences
from a knowledge base (e.g., using corpus of medical journals to make a
diagnosis)
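
A minimal sketch of the entropy definition above, assuming Python with numpy (the function name and sample data are illustrative, not part of the AIGP body of knowledge):

```python
import numpy as np

def entropy(labels):
    """Shannon entropy of a label distribution: H = -sum(p * log2(p)).
    Higher entropy = more randomness = harder to predict."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

# A balanced dataset is maximally unpredictable (H = 1.0 for two classes);
# a skewed one is easier to predict (H closer to 0).
print(entropy(["cat", "dog", "cat", "dog"]))  # 1.0
print(entropy(["cat", "cat", "cat", "dog"]))  # ~0.81
```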
● (2) automation (elements of varying levels);
○ Automation = any technology that reduces human labor esp for predictable/routine
tasks. Automation sets up machines to follow human commands. AI sets up machines to
mimic humans and think for themselves.
● (3) role of humans (define objectives or provide data);
○ Define objectives, decide relevant data, train the AI to perform objective.
○ AI can enhance human work quality / more efficient
○ Decide guardrails, safeguards, appropriate outcomes.
● (4) output (content, predictions, recommendations, or decisions).
○ Predictive AI - offers predictions, can be used to inform recommendations or decisions
○ Generative AI - generates new content, such as text, images, and video

| Parameter | Generative AI | Predictive AI |
|---|---|---|
| Objective / Function | Generates new, original content or data | Predicts and analyzes existing patterns or outcomes |
| Training data | Diverse and comprehensive | Historical data for learning/prediction |
| Examples | Text generation, image synthesis | Forecasting, classification, regression |
| Learning process | Learns patterns and relationships in data | Learns from historical data to make predictions |
| Use cases | Creative tasks, content creation | Business analytics, financial forecasting |
| Challenges | May lack specificity in output | Limited to existing patterns, may miss novel scenarios |
| Training complexity | Generally more complex and resource-intensive | Requires less complex training compared to generative models |
| Different algorithms | Uses complex algorithms and deep learning to generate new content based on the data it is trained on | Generally relies on statistical algorithms and machine learning to analyze data and make predictions |

● Understand what it means that an AI system is a socio-technical system.


○ Socio-technical system = type of system in which both social and technical elements
are intertwined with each other. AI systems are socio-technical as they are not just
technical tools but also have a social impact.
● Knowledge of the OECD framework for the classification of AI systems.
○ Goals:
■ Promote a common understanding of AI: Identify features of AI systems that
matter most, to help governments and others tailor policies to specific AI
applications and help identify or develop metrics to assess more subjective
criteria (such as well-being impact).
■ Inform registries or inventories: Help describe systems and their basic
characteristics in inventories or registries of algorithms or automated decision
systems.
■ Support sector-specific frameworks: Provide the basis for more detailed
application or domain-specific catalogs of criteria, in sectors such as healthcare
or in finance.
■ Support risk assessment: Provide the basis for related work to develop a risk
assessment framework to help with de-risking and mitigation and to develop a
common framework for reporting about AI incidents that facilitates global
consistency and interoperability in incident reporting.
■ Support risk management: Help inform related work on mitigation, compliance
and enforcement along the AI system lifecycle, including as it pertains to
corporate governance.
○ Classification framework for AI systems:
■ People & Planet: potential for human-centric, trustworthy AI that benefits people
and planet. Core characteristics include users and impacted stakeholders, as
well as the impact on human rights, the environment, well-being, society and
work.
■ Economic Context: economic/sectoral environment where AI is deployed.
Characteristics include vertical, business function, business model, criticality,
deployment, impact and scale, technical maturity.
■ Data & Input: describes the data and/or expert input with which an AI model
builds a representation of the environment. Characteristics include the
provenance of data and inputs, machine and/or human collection method, data
structure and format, and data properties. Data & Input characteristics can
pertain to data used to train an AI system (“in the lab”) and data used in
production (“in the field”).
■ AI Model: a computational representation of all or part of the external
environment of an AI system – encompassing, for example, processes, objects,
ideas, people and/or interactions that take place in that environment. Core
characteristics include technical type, how the model is built (using expert
knowledge, machine learning or both) and how the model is used (for what
objectives and using what performance measures).
■ Task & Output: refers to the tasks the system performs, e.g.
personalisation, recognition, forecasting or goal-driven optimisation; its outputs;
and the resulting action(s) that influence the overall context. Characteristics of
this dimension include system task(s); action autonomy; systems that combine
tasks and actions like autonomous vehicles; core application areas like computer
vision; and evaluation methods.
○ Principles:
■ Inclusive growth, sustainable development/well-being
■ Human rights and democratic values, incl. fairness and privacy
■ Transparency and explainability
■ Robustness, security, safety
■ Accountability
○ Recommendations for policymakers
■ Investing in AI R&D
■ Fostering an inclusive AI enabling ecosystem
■ Shaping an enabling, interoperable governance and policy environment for AI
■ Building human capacity and preparing for labor market transition
■ International co-operation for trustworthy AI
● Use cases and benefits of AI (recognition, event detection, forecasting, personalization,
interaction support, goal-driven optimization, recommendation).
○ GenAI
■ Augments creative work (art, design, music, product, etc.) as well as routine
professional work. A newer form of AI compared to predictive AI.
● Text: Generate credible text on various topics. It can compose business
letters, provide rough drafts of articles, and compose annual reports.
● Images: Output realistic images from text prompts, create new scenes,
and simulate a new painting.
● Video: Compile video content from text automatically and put together
short videos using existing images.
● Music: Compile new musical content by analyzing a music catalog and
rendering a new composition.
● Product design: Can be fed inputs from previous versions of a product
and produce several possible changes that can be considered in a new
version.
● Personalization: Personalize experiences for users such as product
recommendations, tailored experiences, and new material that closely
matches their preferences.
○ Predictive AI
■ Financial services: Enhances financial forecasts. By pulling data from a wider
data set and correlating financial information with other forward-looking business
data, forecasting accuracy can be greatly improved.
■ Fraud detection: Spot potential fraud by sensing anomalous behavior. In
banking and e-commerce, there might be an unusual device, location, or request
that doesn’t fit with the normal behavior of a specific user. A login from a
suspicious IP address, for example, is an obvious red flag.
■ Healthcare: Find use cases such as predicting disease outbreaks, identifying
higher-risk patients, and spotting the most successful treatments.
■ Marketing: More closely define the most appropriate channels and messages to
use in marketing.

I.B. Understand the differences among types of AI systems (4/100)


● Understand the differences between strong/broad and weak/narrow AI.

| Category | Weak AI | Strong AI / AGI |
|---|---|---|
| Domain | Singular, limited | Multi-domain |
| Human thought processes | Mimics | Comprehensive intellect |
| Rules | Adheres to, constrained by | Goes beyond |
| Creativity | No consciousness | Creativity, common sense, logic |
| Data | Converts data to useful info by identifying patterns/generating predictions | Understands goals, motivations, standards, cognitive processes |
| Real-life examples | Alexa, Siri, Google Assistant, ChatGPT | None |

● Machine learning and its training methods


○ Supervised = works with pre-labeled data with known/desired outputs (predictors and
targets, independent and dependent variables) (e.g., classification, regression); a sketch follows this list.
■ Classification algorithms = decide the category of an entity, object or event as
represented in the data. The simplest classification algorithms answer binary
questions such as yes/no, sales/not-sales or cat/not-cat. More complicated
algorithms lump things into multiple categories like cat, dog or mouse. Popular
classification algorithms include decision trees, logistic regression,
random forest and support vector machines.
■ Regression algorithms = identify relationships within multiple variables
represented in a data set. This approach is useful when analyzing how a specific
variable such as product sales correlates with changing variables like price,
temperature, day of week or shelf location. Popular regression algorithms
include linear regression, multivariate regression, decision tree and least
absolute shrinkage and selection operator (lasso) regression.
■ Exploratory data analysis - pre-training data discovery process to gain
preliminary insights eg identifying patterns and outliers
■ Decision tree = type of supervised learning used in ML that represents decisions
/ consequences
● Random forest = build multiple trees then merge together to get more
accurate / stable prediction. Each tree built with random subset of
training data (eg bootstrap aggregation). Helpful for datasets with
missing values or very complex sets.
○ Unsupervised = Automates the process of finding patterns in a dataset (e.g., clustering,
dimensional reduction).
■ Clustering algorithms = help group similar sets of data together based on
various criteria. Practitioners can segment data into different groups to identify
patterns within each group.
■ Dimension reduction algorithms = explore ways to compact multiple variables
efficiently for a specific problem.
○ Semi-supervised = characterize processes that use unsupervised learning algorithms to
automatically generate labels for data that can be consumed by supervised techniques.
○ Reinforcement = used to improve models after they’ve been deployed. The most
common reinforcement learning algorithms use various neural networks. Trains models to
optimize actions within a given environment to achieve a certain goal (e.g., score in a video
game).
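
A minimal sketch of the supervised and unsupervised methods above, assuming Python with scikit-learn (the synthetic data and parameter choices are illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Supervised: pre-labeled data with known targets (here, a binary class).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random forest = many decision trees, each built on a random bootstrap
# sample of the training data (bagging), merged for a more stable prediction.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)             # training data trains the model
preds = clf.predict(X_test)           # testing data evaluates its performance
print(accuracy_score(y_test, preds))

# Unsupervised: no labels; clustering groups similar rows together.
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(clusters[:10])
```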
● Understand deep learning, generative AI, multi-modal models, transformer models, and the major
providers.
○ Deep learning = subset of ML (which is subset of AI) that imitates the way we humans
gain certain types of knowledge (artificial neural networks). Deep learning models can be
taught to perform classification tasks and recognize patterns in photos, text, audio and
other various data. It is also used to automate tasks that would normally need human
intelligence, such as describing images or transcribing audio files.
■ Enables a computer to learn by example (e.g., GenAI)
■ Can be used for digital assistants, fraud detection, and facial recognition
■ Is able to create accurate predictive models from large amounts of unlabeled,
unstructured data
● Multi-modal models = subset of deep learning (along with GenAI) that takes data from
multiple “modalities” (e.g., text, images, video, audio, sensor data) and fuses/analyzes it to
create a more complete representation and better performance on tasks (better
predictions, capturing patterns not visible to uni-modal models).
○ Under the hood: unimodal encoders encode the individual modalities (usually one
per input modality); a fusion network combines the features extracted from each
modality during the encoding phase; and a classifier accepts the fused data and
makes predictions (see the sketch below).
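
A minimal sketch of that encoder/fusion/classifier pattern, assuming Python with PyTorch (the layer sizes, modality dimensions, and class name are illustrative):

```python
import torch
import torch.nn as nn

class FusionModel(nn.Module):
    """Two unimodal encoders -> feature fusion -> classifier."""
    def __init__(self, text_dim=300, image_dim=512, hidden=128, n_classes=2):
        super().__init__()
        # One encoder per input modality.
        self.text_encoder = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.image_encoder = nn.Sequential(nn.Linear(image_dim, hidden), nn.ReLU())
        # Fusion network: simple concatenation followed by a linear layer.
        self.fusion = nn.Linear(hidden * 2, hidden)
        # Classifier accepts the fused features and makes predictions.
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, text_feats, image_feats):
        t = self.text_encoder(text_feats)    # encode each modality separately
        i = self.image_encoder(image_feats)
        fused = torch.relu(self.fusion(torch.cat([t, i], dim=-1)))
        return self.classifier(fused)

model = FusionModel()
logits = model(torch.randn(4, 300), torch.randn(4, 512))  # batch of 4 examples
print(logits.shape)  # torch.Size([4, 2])
```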
● Active learning = subset of AI/ML where the algorithm can choose some of the data it learns
from. The model requests the additional data points that will help it learn best.
● Transformer models = subset of deep learning used for natural language processing
(NLP - AI subfield that helps computers understand language). These models can
translate text and speech in near-real-time. Transformer models work by processing
input data, which can be sequences of tokens or other structured data, through a series
of layers that contain self-attention mechanisms and feed-forward neural networks (ML
models mimicking how the brain processes information through multiple layers, incl.
hidden layers, allowing complex nonlinear pattern recognition - e.g., image recognition,
medical diagnosis). Transformer models are trained using supervised learning, where
they learn to minimize a loss function that quantifies the difference between the model’s
predictions and the ground truth for the given task. A usage sketch follows this subsection.
○ LLM - type of NLP model. Pre-trained on massive text datasets for language
learning. Two types - generative (predictions) and discriminative
(classifications)
○ Text as input and output - input = a block of text (converted from speech or
written) and output = some desired characteristic of it (e.g., search query, command,
review), its meaning and tone.
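
A minimal text-in/text-out sketch of the generative/discriminative LLM distinction above, assuming Python with the Hugging Face transformers library (the specific models the pipelines download are illustrative defaults):

```python
from transformers import pipeline

# Discriminative use: a block of text in, a classification out.
classifier = pipeline("sentiment-analysis")
print(classifier("The product arrived late and damaged."))
# e.g., [{'label': 'NEGATIVE', 'score': 0.99}]

# Generative use: a prompt in, a predicted continuation out.
generator = pipeline("text-generation", model="gpt2")
print(generator("AI governance matters because", max_new_tokens=20))
```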
● Understand the difference between robotics and robotic processing automation (RPA).
○ Robotics = multidisciplinary field wrt design, construction, operation and programming of
robots. Robots allow AI systems and software to interact with the physical world. Robotics
specifically relates to machines that can see, sense, actuate and, with varying degrees of
autonomy, make decisions.
○ Robotic processing automation = software robot that mimics human actions,
whereas artificial intelligence is the simulation of human intelligence using computer
software.

I.C. Understand the AI technology stack (2/100)


● Layered approach allows for modularity, scalability and easy troubleshooting, cost reduction,
efficient development, tailored outputs vs monolithic.
● Architecture components
○ Infra
■ data ingestion
■ data storage
■ data processing (monitoring and managing workloads)
■ Computing platform (e.g., hadoop, GPUs, TPUs, accelerator chips). May be
cloud based and scalable.
○ ML models (e.g., GANs, transformers)
■ machine learning algorithms
■ ML libraries, frameworks (pytorch, tensorflow etc)
■ Foundation Models - large-scale pretrained model with AI capabilities. cognitive
layer enabling complex decisionmaking and reasoning. Base for dedicated
models. Trained on extensive and diverse datasets.
● Models, data provenance and underlying code can be closed-source
(faster, cloud) or open-source (transparency, security).
○ Applications
■ APIs
● Programming languages, deployment tools (e.g., langchain, fixie,
semantic kernel, vertex AI) - equip engineers to build apps for AI
■ user interfaces.

I.D. Understand the history of AI and the evolution of data science (2/100)

● 1956 Dartmouth summer research project on AI


○ Birth of AI as a field of research. Conjecture that every type of learning can be “so
precisely described that a machine can be made to simulate it.”
● Summers, winters and key milestones.
○ 1956–1974: THE GOLDEN YEARS During the Golden Years of AI, the programs –
including computers solving algebra word problems and learning to speak English –
seem "astonishing" to most people.
○ 1974–1980: 20TH CENTURY AI WINTER The first AI winter occurs as the capabilities of
AI programs remain limited, mostly due to the lack of computing power at the time. They
can still only handle trivial versions of the problems they were supposed to solve.
○ 1987–1993: A RENEWED INTEREST The business community's fascination and
expectations of AI, particularly expert systems, rise. But they are quickly confronted by
the reality of their limitations.
● Understand how the current environment is fueled by exponential growth in computing
infrastructure and tech megatrends (cloud, mobile, social, IOT, PETs, blockchain, computer
vision, AR/VR, metaverse).
○ Increasing computing and storage capacities
○ Enormous growth in the amount of data available for learning and analysis.
○ The development of learning machines based on artificial neural networks.

Domain 2: Understanding AI Impacts and Responsible AI Principles (10/100)

II.A. Understand the core risks and harms posed by AI systems (4/100)
● Understand the potential harms to an individual (civil rights, economic opportunity, safety)
○ Inaccuracy leading to human rights violations
■ Implicated for crimes they did not commit
■ Civil liberties curtailed by facial recognition from law enforcement. Excessive
surveillance.
■ Chilling effect on public discourse and activism
○ Lack of informed consent for decisionmaking and data collection
■ Unforeseen downstream use of already-shared data
■ Lack of awareness that collection is taking place.
○ Security - bad actors can take sensitive SPII (e.g., facial).
● Understand the potential harms to a group (discrimination towards sub-groups).
○ Disproportionate impact on people of color
■ Accuracy - unreliability of FBI records impacting POC job seekers.
■ Stereotypes built into AI
■ Disproportionate data collection (e.g., crime registries containing more POC)
● Understand the potential harms to society (democratic process, public trust in governmental
institutions, educational access, jobs redistribution).
○ Concentration of power in hands of a few (govts, large tech cos, developed countries)
○ Disinformation online
● Understand the potential harms to a company or institution (reputational, cultural, economic,
acceleration risks).
○ Sensitive Data Exposure: Unintended exposure of confidential information, including
customer and business data, posing risks of identity theft, financial fraud, and loss of
public trust. EO prioritizes privacy/security, indicating closer scrutiny.
○ Cybersecurity Vulnerabilities: Integration of AI with entities’ institutional platforms can
create entry points for hackers, risking not just data theft but also potential disruption of
operations, particularly supply chains.
○ Data Control Concerns: Relying on external AI solutions can lead to issues with data
control and governance, and can potentially expose companies to additional risks if
vendors do not meet ESG or cybersecurity standards.
○ Opaque Decision Processes: The complexity of AI algorithms, especially in deep
learning, often results in a lack of transparency and explainability, making it difficult for
stakeholders to understand how decisions are made. This “black box” nature of AI can
hinder accountability and trust in AI-driven ESG initiatives.
○ Accountability Challenges: In cases where AI-driven decisions lead to adverse ESG
outcomes, it can be difficult to attribute responsibility.
○ Compliance Complexity: Difficulty keeping up with fast-growing AI laws, regs and
standards, increasing the risk of inadvertent non-compliance.
○ Legal Uncertainties: Rapidly evolving AI technologies can outpace existing legal
frameworks, creating uncertainties about liability for collection, maintenance and use of
data, intellectual property rights, and other legal issues
● Understand the potential harms to an ecosystem (natural resources, environment, supply
chain). (source)
○ High Energy Consumption: The computation-intensive nature of training and running
AI, particularly large models, can lead to high energy consumption and significant carbon
footprints.
○ Life Cycle Impact of AI Hardware: The hardware lifecycle needed to run AI (e.g., servers,
data centers) contributes to environmental concerns such as electronic waste and resource depletion.

II.B. Understand the characteristics of trustworthy AI systems (4/100)

● “Human Centric” AI Systems = AI systems that amplify/augment rather than displace human
abilities. Preserve human control that ensures AI meets our needs while operating transparently,
delivering equitable outcomes and respecting privacy.
● Accountable AI system characteristics = safe, secure/resilient, valid/reliable, fair.
○ Accountability: responsibility to ensure AI system is “ethical, fair, transparent and
compliant” and ensures the actions, decisions and outcomes of an AI system can be
traced back to the entity responsible for it.
● Transparent AI system = makes info available to stakeholders eg whether AI is used, how
model works (e.g., through model cards, system cards). Impt for explainability and accountability.
● Explainable AI system (XAI) = The ability to describe or provide sufficient information about how
an AI system generates a specific output or arrives at a decision in a specific context to a
predetermined addressee. Impt for transparency and trust.
● Privacy enhancing technologies (PET) = Tech approaches that allow for data collection /
processing / sharing while safeguarding privacy. They enable a relatively high level of utility from
data while minimizing the need for extensive data collection and processing.
○ Examples:
■ Homomorphic encryption (can compute without decrypting);
■ secure multi-party computation (SMPC; multiple parties to jointly compute while
keeping data secure from one another),
■ federated learning (ML that enables models across multiple decentralized
devices without data transfer);
■ Synthetic data (generates synthetic data with same stat properties and
correlations as real data but without PII. comes with risks).
■ differential privacy (adds statistical noise so that individuals are hard to identify; see the sketch after this list);
■ trusted execution environment (TEE; provide secure environment within
computer system where sensitive ops are executed away from main
process/memory).
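
A minimal sketch of one PET above, differential privacy via the Laplace mechanism, assuming Python with numpy (the epsilon value, query, and data are illustrative):

```python
import numpy as np

def dp_count(data, predicate, epsilon=1.0):
    """Differentially private count: true count plus Laplace noise.
    A count query has sensitivity 1, so the noise scale is 1/epsilon;
    smaller epsilon = more noise = stronger privacy, less utility."""
    true_count = sum(predicate(x) for x in data)
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

ages = [34, 29, 41, 52, 38, 27, 45]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy answer near 3
```

The noisy answer lets an analyst learn the approximate count while making it hard to tell whether any single individual’s record was included.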

II.C. Understand the similarities and differences among existing and emerging ethical guidance on AI
(2/100)
● FIPPs, ECHR and OECD principles.
○ Fair Information Practice Principles (FIPPs) = collection of widely accepted principles that
agencies use when evaluating information systems, processes, programs, and activities
that affect individual privacy. Principles, not requirements.
■ Access and Amendment- Agencies should provide individuals with appropriate
access to PII and appropriate opportunity to correct or amend PII.
■ Accountability- Agencies should be accountable for complying with these
principles and applicable privacy requirements, and should appropriately monitor,
audit, and document compliance. Clear R&R for PII for employees/contractors.
Provide appropriate training.
■ Authority- Agencies should only create, collect, use, process, store, maintain,
disseminate, or disclose PII if they have authority to do so, and should identify
this authority in the appropriate notice.
■ Minimization- Agencies should only create, collect, use, process, store,
maintain, disseminate, or disclose PII that is directly relevant and necessary to
accomplish a legally authorized purpose, and should only maintain PII for as long
as is necessary to accomplish the purpose.
■ Quality and Integrity- Agencies should create, collect, use, process, store,
maintain, disseminate, or disclose PII with such accuracy, relevance, timeliness,
and completeness as is reasonably necessary to ensure fairness to the
individual.
■ Individual Participation- Agencies should involve the individual in the process
of using PII and, to the extent practicable, seek individual consent for the
creation, collection, use, processing, storage, maintenance, dissemination, or
disclosure of PII. Agencies should also establish procedures to receive and
address individuals’ privacy-related complaints and inquiries.
■ Purpose Specification and Use Limitation- Agencies should provide notice of
the specific purpose for which PII is collected and should only use, process,
store, maintain, disseminate, or disclose PII for a purpose that is explained in the
notice and is compatible with the purpose for which the PII was collected, or that
is otherwise legally authorized.
■ Security- Agencies should establish administrative, technical, and physical
safeguards to protect PII commensurate with the risk and magnitude of the harm
that would result from its unauthorized access, use, modification, loss,
destruction, dissemination, or disclosure.
■ Transparency- Agencies should be transparent about information policies and
practices with respect to PII, and should provide clear and accessible notice
regarding creation, collection, use, processing, storage, maintenance,
dissemination, and disclosure of PII.
○ European Court of Human Rights: rules on individual or State applications alleging
violations of the civil and political rights set out in the European Convention on Human
Rights.
○ OECD AI Principles Promotes AI that is innovative and trustworthy and that respects
human rights and democratic values. Value-based principles
■ Inclusive growth, sustainable development and well-being Stakeholders
should proactively engage in responsible stewardship of trustworthy AI in pursuit
of beneficial outcomes for people and the planet, such as augmenting human
capabilities and enhancing creativity, advancing inclusion of underrepresented
populations, reducing economic, social, gender and other inequalities, and
protecting natural environments, thus invigorating inclusive growth, sustainable
development and well-being.
■ Human-centred values and fairness AI actors should respect the rule of law,
human rights and democratic values, throughout the AI system lifecycle. These
include freedom, dignity and autonomy, privacy and data protection,
non-discrimination and equality, diversity, fairness, social justice, and
internationally recognised labor rights. To this end, AI actors should implement
mechanisms and safeguards, such as capacity for human determination, that are
appropriate to the context and consistent with the state of art.
■ Transparency and explainability AI Actors should commit to transparency and
responsible disclosure regarding AI systems. To this end, they should provide
meaningful information, appropriate to the context, and consistent with the state
of art:
● to foster a general understanding of AI systems,
● to make stakeholders aware of their interactions with AI systems,
including in the workplace,
● to enable those affected by an AI system to understand the outcome,
● Contestability to enable those adversely affected by an AI system to
challenge its outcome based on plain and easy-to-understand
information on the factors, and the logic that served as the basis for the
prediction, recommendation or decision.
■ Robustness, security and safety AI systems should be robust (maintains
function even in changed/adversarial circumstances), secure and safe (designed
to minimize potential harm) throughout their entire lifecycle so that, in conditions
of normal use, foreseeable use or misuse, or other adverse conditions, they
function appropriately and do not pose unreasonable safety risk.
● To this end, AI actors should ensure traceability, including in relation to
datasets, processes and decisions made during the AI system lifecycle,
to enable analysis of the AI system’s outcomes and responses to inquiry,
appropriate to the context and consistent with the state of art.
● AI actors should, based on their roles, the context, and their ability to act,
apply a systematic risk management approach to each phase of the AI
system lifecycle on a continuous basis to address risks related to AI
systems, including privacy, digital security, safety and bias.
■ Accountability AI actors should be accountable for the proper functioning of AI
systems and for the respect of the above principles, based on their roles, the
context, and consistent with the state of art.
○ White House Office of Science and Technology Policy Blueprint for an AI Bill of
Rights
■ 5 principles:
● Safe and Effective Systems You should be protected from unsafe or
ineffective systems.
● Algorithmic Discrimination Protections You should not face
discrimination by algorithms and systems should be used and designed
in an equitable way
● Data Privacy You should be protected from abusive data practices via
built-in protections and you should have agency over how data about you
is used.
● Notice and Explanation You should know that an automated system is
being used and understand how and why it contributes to outcomes that
impact you.
● Human Alternatives, Considerations, and Fallback You should be
able to opt out, where appropriate, and have access to a person who can
quickly consider and remedy problems you encounter.
○ European Commission High-Level Expert Group on AI
■ Deliverable 1: Ethics Guidelines for Trustworthy AI The document
puts forward a human-centric approach on AI and lists 7 key requirements
that AI systems should meet in order to be trustworthy.
■ Deliverable 2: Policy and Investment Recommendations for
Trustworthy AI Building on its first deliverable, the group put forward 33
recommendations to guide trustworthy AI towards sustainability, growth,
competitiveness, and inclusion. At the same time, the recommendations
will empower, benefit and protect European citizens.
■ Deliverable 3: The final Assessment List for Trustworthy AI (ALTAI) A
practical tool that translates the Ethics Guidelines into an accessible and
dynamic self-assessment checklist. The checklist can be used by
developers and deployers (users) of AI who want to implement the key
requirements. This list is available as a prototype web-based tool
and in PDF format.
■ Deliverable 4: Sectoral Considerations on the Policy and Investment
Recommendations The document explores the possible implementation
of the recommendations, previously published by the group, in three
specific areas of application: Public Sector, Healthcare and
Manufacturing & the Internet of Things.
○ UNESCO Principles
■ Goals: basis for AI systems to work for the good of humanity and prevent harm. It
also aims at stimulating the peaceful use of AI systems.
■ Method: Establish ethical frameworks as well as practical policy
recommendations with a strong emphasis on inclusion / ESG issues.
○ Asilomar AI Principles 23 principles divided into 3 categories developed at a
conference sponsored by the Future of Life Institute (nonprofit)
● IEEE
○ Eight general principles: human rights and well-being, transparency,
accountability, effectiveness, competence and “awareness of misuse” in addition
to “data agency,” giving individuals control over their data
○ Ethics-by-design approach
○ Working group in progress for new standard for AI systems
● CNIL AI Action Plan.
○ Understanding the functioning of AI systems and their impacts on people:
CNIL focusing on a few key AI privacy issues (e.g., public PII web scraping, AI
system user PII protection, DSAR for training AI systems).
○ Supporting innovative players in the AI ecosystem in France and Europe:
call for projects to participate in 2023 regulatory sandbox and dialogue with
developers / R&D centers.
○ Auditing and controlling AI systems: The CNIL plans to develop a tool to audit
AI systems and will continue to investigate complaints lodged with its office
related to AI, including generative AI.

Domain 3: Understanding How Current Laws Apply to AI Systems (10/100)

III.A. Understand the existing laws that interact with AI use (6/100)
● Know the laws that address unfair and deceptive practices.
○ Federal Trade Commission (FTC) Act (US) (Wheeler-Lea Act of 1938)
○ EU Directive on unfair commercial practices from 2005
○ The Children's Online Privacy Protection Act (COPPA), which governs the collection of
information about minors
○ The Gramm Leach Bliley Act (GLBA), which governs personal information collected by
banks and financial institutions
○ Telemarketing Sales Rule (TSR), Telephone Consumer Protection Act of 1991, and the
Do-Not-Call Registry
○ Junk Fax Protection Act of 2005 (JFPA)
○ Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003
(CAN-SPAM) and the Wireless Domain Registry
○ Telecommunications Act of 1996 and Customer Proprietary Network Information (CPNI)
○ Cable Communications Policy Act of 1984
○ Video Privacy Protection Act of 1988 (VPPA) and Video Privacy Protection Act
Amendments of 2012
○ Driver's Privacy Protection Act (DPPA)
● Know relevant non-discrimination laws (credit, employment, insurance, housing, etc.).
○ FCRA regulates “consumer reporting agencies” and users of such reports. GenAI service
could meet this definition if the service regularly produces reports about individuals'
"character, general reputation, personal characteristics, or mode of living" and these
reports are used for employment purposes.”
○ Confidentiality of Substance Use Disorder Patient Records Rule Prohibits patient
information from being used to initiate criminal charges or as a predicate to conduct a
criminal investigation of the patient
○ Equal Credit Opportunity Act - prohibits discrimination against credit applicants on the
basis of race, color, religion, national origin, sex, marital status, age or receipt of public assistance
○ Fair and Accurate Credit Transactions Act of 2003 (FACTA) contains protections against
identity theft, “red flags” rules
○ Privacy Protection Act of 1980 (PPA) The PPA requires law enforcement to obtain a
subpoena in order to obtain First Amendment-protected materials
○ Title VII of the Civil Rights Act of 1964 (“CRA”) prohibits employment discrimination on
the basis of race, color, religion, sex, or national origin
○ Title I of the Americans With Disabilities Act (“ADA”) prohibits employment
discrimination against “qualified” individuals with disabilities
○ Genetic Information Nondiscrimination Act of 2008 (GINA)
○ Illinois Artificial Intelligence Video Interview Act – Requires that any employer relying
on AI technology to analyze a screening interview must provide information to
candidates and obtain consent; must also report demographic data to the state to
analyze bias
○ Maryland HB 1202 – Prohibits the use of facial recognition technology in the hiring
process without consent of applicant
○ NYC Local Law 144 – A bias audit must be conducted for any use of automated
employment decision tools; notice must be provided to applicants and an alternative
selection process must be offered. NO NOTICE OF THE BIAS AUDIT ITSELF IS REQUIRED
(a summary of audit results is published instead)
○ The Wiretap Act
● Know relevant product safety laws.
○ Consumer Product Safety Act (CPSA) in 1972 for the purposes of protecting
consumers against the risk of injury due to consumer products, enabling consumers to
evaluate product safety, establishing consistent safety standards, and promoting research
into the causes and prevention of injuries and deaths associated with unsafe products.
○ The General Product Safety Regulation (GPSR) requires that all consumer products
on the EU markets are safe and it establishes specific obligations for businesses to
ensure it. It applies to non-food products and to all sales channels.
● Know relevant IP law.
○ USPTO yet to fully establish guidance on topic but generally recognize IP rights only for
human inventors. European Patent Office and EU IP Office similar.
● Understand the basic requirements of the EU Digital Services Act (transparency of
recommender systems).
○ Fully applicable since Feb ‘24. The DSA imposes obligations on all information society services
that offer an intermediary service to recipients who are located or established in the EU,
regardless of whether that intermediary service provider is incorporated or located within
the EU.
■ Transparency obligations: Advertising, user profiling, and recommender
systems
● Under Article 26, providers of online platforms must supply users with
information relating to any online advertisements on its platform so that
the recipients of the services can clearly identify that such information
constitutes an advertisement. Providers of online platforms are prohibited
from presenting targeted advertisements based on profiling using either
the personal data of minors or special category data (as defined in the
GDPR).
● Article 27 requires providers of online platforms that use
recommendation systems to set out in their T&Cs the main parameters
they use for such systems, including any available options for recipients
to modify or influence them. Under Article 38, VLOPs and VLOSEs must
provide at least one option (not based on profiling) for users to modify the
parameters used.
● Know relevant privacy laws concerning the use of data.
○ The Federal (US) Privacy Act of 1974 and the E-Government Act of 2002 require
agencies to address the privacy implications of any system that collects identifiable
information on the public
○ HIPAA - health info. HITECH (2009) increased HIPAA penalties and gave individuals
greater access rights.
○ FERPA - student edu records
○ Protection of Pupil Rights Amendment of 1978 (PPRA), which prevents the sale of
student information for commercial purposes
○ CCPA/CPRA, Virginia Consumer Data Protection Act (VCDPA), CPA, CTDPA, Montana’s
Consumer Data Privacy Act, Delaware Personal Data Privacy Act, Utah Consumer
Privacy Act (UCPA), Oregon Consumer Privacy Act (OCPA), Iowa’s Consumer Data
Protection Act (ICDPA), New Jersey Data Privacy Act (NJDPA), Indiana Consumer Data
Protection Act, Tennessee Information Protection Act, Texas Data Privacy and Security
Act (TDPSA)
III.B. Understanding key GDPR intersections (3/100)
● Understand automated decision making, data protection impact assessments, anonymization,
and how they relate to AI systems
○ Automated decision making
■ GDPR Art. 22 - right not to be subject to a decision based solely on automated
processing (narrow exceptions - explicit consent, contract necessity, authorization by law)
■ Impact on AI systems: many AI systems do just that
○ DPIA
■ GDPR Art. 35 mandates DPIA for high risk processing (systematic processing on
large scale, SPII etc). Needs to include data processing, purposes,
proportionality, risks/rights of data subjects, mitigations.
■ Impact on AI systems: DPIA may be required for training on certain datasets. A DPIA ≠
the conformity assessment required under the EU AI Act.
○ Anonymization
■ GDPR no explicit reference to anonymization, but “anonymized data” would not
be PII. Anonymizing the training data can help to mitigate concerns by separating
the information from the person.
■ Article 10 of EU AI Act has de-biasing exception to GDPR ban on processing
sensitive data. (source)
● Understand the intersection between requirements for AI conformity assessments and
DPIAs.
○ EU AI Act requires conformity assessments for HRAIs to demonstrate compliance with the
Article 9-15 requirements for HRAIs before bringing them to market.
■ Requirements
● Art. 9 - risk management
● Art. 10 - data/data governance
● Art. 11 - technical documentation
● Art. 12 - record keeping
● Art. 13 - transparency and provision of information
● Art. 14 - human oversight
● Art. 15 - accuracy, robustness, cybersecurity
■ GDPR explicitly mentioned when it comes to PII processing. HRAI may need
DPIA and CA.

| | DPIA | CA |
|---|---|---|
| Purpose | Requires controllers to assess risks and make decisions based on them; holds controllers accountable for their actions. Controller is free to decide if and how to mitigate. | Ensures compliance with specific legal requirements; considered mitigations for HRAI. |
| High-risk processing | Required under GDPR for high-risk processing; for HRAI not always required but likely to be best practice. | Likely required. |

| | GDPR | EU AI Act |
|---|---|---|
| Human oversight | Art. 22 - users have the right to know about an automated decision and its logic, and to challenge it, for all AI systems. | Art. 14 - need to ensure effective human oversight over HRAIs. Helps ensure decisions are fair, transparent and unbiased. |
| Right to information / explainability | Art. 22 - needs meaningful info about the logic involved (info about the algorithm rather than justifying the rationale of using one). Info about input data and general parameters, but not source code or how/why a certain specific decision was made. | Art. 13 - transparency and provision of information: requires providing users with clear info about how HRAIs function and the type of data they use (a basic understanding of the system’s decision-making process), including info on the logic behind decisions and, where relevant, why the system reached a particular conclusion. Art. 11 - technical documentation: requires providers to create comprehensive technical documentation for HRAIs detailing system design, development, operation and performance; may be accessed by regulators/auditors. |

III.C. Understanding liability reform (1/100)


● Awareness of the reform of EU product liability law.
○ Article 4 of the proposed Directive brings software into the scope of EU product liability
laws. Operating systems, firmware, computer programs and applications and AI systems
are all expressly included (by Recital 12).
○ Article 7 extends liability to manufacturers of defective components, distributors, fulfilment
service providers and online platforms.
○ Articles 8 and 9 provide a disclosure regime and set of rebuttable presumptions designed
to assist claimants.
● Understand the basics of the AI Liability Directive (AILD). (still being considered)
○ “Complements the Artificial Intelligence Act by introducing a new liability regime that
ensures legal certainty, enhances consumer trust in AI, and assists consumers’ liability
claims for damage caused by AI-enabled products and services. It applies to AI systems
that are available on the EU market, or operating within the EU market.”
○ Without this, victims need to prove a wrongful action/omission, which is hard with AI. High
costs and drawn-out legal proceedings could deter victims. Also causes business
uncertainty.
○ Complements the AI Act (which introduces risk mitigation requirements) by providing a pathway to
individual relief for those who have suffered damage caused by AI. Basic discovery rights (and court
mandates to produce info) and the ability to invoke favorable rules of national law
● Awareness of U.S. federal agency involvement (EO14091)
○ Further Advancing Racial Equity and Support for Underserved Communities
Through the Federal Government requires federal agencies to integrate equity / consult
civil rights offices when planning and decision-making for AI / automated systems.
○ Algorithmic discrimination = when automated systems contribute to unjustified different
treatments disfavoring people based on race/color/ethnicity etc.

Domain 4: Understanding the Existing and Emerging AI Laws and Standards (12/100)

IV.A Understanding the requirements of the EU AI Act (5/100)


● Understand the classification framework of AI systems (prohibited, high-risk, limited risk, low risk).
○ Unacceptable- significant threat to fundamental rights, democracy, social values. May
compromise critical infrastructure. DO NOT USE
○ High Risk- critical sectors such as healthcare, transportation, and law enforcement.
Requires conformity assessment prior to use.
○ Limited Risk- considered less risky than their high-risk counterparts and thus face fewer
regulatory constraints. However, while they do not require the same level of scrutiny, they
must still adhere to Article 13 transparency obligations to maintain accountability and
trustworthiness in their deployment. This means that the developers and operators of
these systems must be able to provide clear explanations of how the system works, what
data it uses, and how it makes decisions (e.g., video games, spam filters).
○ Minimal risk - unregulated.
○ General purpose AI models - AI models that display significant generality / capable of
performing wide range of tasks. Stand separately from the above risk-based framework.
Requirements below:
■ Art. 45 - technical documentation detailing training data, purposes ~HRAI
■ Art. 46 - info sharing. When GPAI integrated into HRAI, cooperate and provide
relevant info to the HRAI developer
■ Art 13 transparency obligations
■ Specific details TBD but likely ~CAs, monitor and mitigate risks, accuracy,
robustness, security requirements.
● Understand requirements for high-risk systems and foundation models
● Understand notification requirements (customers and national authorities).
○ “The AI Act requires developers of high-risk AI to set up a reporting system for serious
incidents as part of wider post-market monitoring. A serious incident is defined as an
incident or a malfunction that led to, might have led or might lead to serious
damage to a person’s health or their death, serious damage to property or the
environment, the disruption of critical infrastructure or the violation of fundamental
rights under EU law. Developers and, in some cases, deployers must notify the
relevant authorities and maintain records and logs of the AI system’s operation at
the time of the incident to demonstrate compliance with the AI Act in case of ex-post
audits of incidents. “ (Source)
● Understand the enforcement framework and penalties for noncompliance.
○ EU AI Act requires establishment of national authorities to adjudicate cases
○ Penalties for non-compliance follow a three-tiered system, with more severe violations of
obligations and requirements carrying heftier penalties. (source). Established AI office to
adjudicate.
○ The heftiest fines are imposed for violations related to prohibited systems: up to
€35,000,000 or 7% of worldwide annual turnover for the preceding financial year,
whichever is higher. (source)
○ Tier 2 - noncompliance w obligations
■ HRAI providers
● Continuously monitor (art 9) through entire lifecycle
● Art 10 - data governance (quality criteria)
● Art 11 - technical documentation to demonstrate compliance
● Art 12 - automatic recording of events for id of possible breaches
● Art 13 - transparency (clear instructions and clarity that AI is being used)
● Art 14 - appropriate human interface tools
● Art 15 - accuracy, robustness and security
● Keeping documentation and logs, taking corrective actions when appropriate,
drawing up the EU declaration of conformity, affixing the CE mark to demonstrate conformity
■ Obligations of authorized rep - must act in accordance with instructions from
provider
■ Also obligations for importers and distributors of HRAIs
■ Deployers
● TOMs to use according to instructions
● Ensuring people doing oversight have necessary skills and training
● Ensuring that input data is sufficiently representative
● Monitoring HRAI use
● Keeping logs that are under deployer control for appropriate time or at
least 6mo.
■ Obligations for notified bodies (runs CAs)
○ The lowest penalties for AI operators are for providing incorrect, incomplete, or
misleading information, up to €7,500,000 or 1% of total worldwide annual turnover for the
preceding financial year, whichever is higher.
○ Penalties for non-compliance can be issued to providers, deployers, importers,
distributors, and notified bodies. (Source)
● Understand procedures for testing innovative AI and exemptions for research.
○ An exception within the AI Act to process special categories of personal data to detect
and correct bias within AI applies to providers of AI systems.
○ “the AI Act's exception applies to developers and entities outsourcing the development of
AI systems, for non-private use. The exception does not seem to apply to organizations
renting a fully developed AI system as a service, for example.” (source)
● Understand transparency requirements, i.e., registration database.
○ High-risk AI systems must be registered in an EU-wide public database
○ Obligation to warn people that they are interacting with an AI system.

IV.B. Understand other emerging global laws (3/100)


● Understand the key components of Canada’s Artificial Intelligence and Data Act (C-27).
(source)
○ “AIDA provides a definition of “person” that includes trusts, partnerships,
unincorporated associations and any other legal entity, and further clarifies when
such a “person” will be considered responsible for an AI system. A person becomes a
“person responsible” for an AI system if they design, develop, make available for
use, or manage the operation of an AI system in the course of international or
interprovincial trade and commerce.”
○ Responsibilities include:
■ ensuring the anonymization of data
■ conducting assessments to determine whether an AI system is “high-impact,”
■ establishing measures related to risks
■ monitoring and keeping records on risk mitigation
■ requirements for organizations to publish a plain-language description of all
high-impact AI systems on a public website.
○ If adopted, Bill C-27 (which includes AIDA) will replace PIPEDA. Amendments being considered; bill pending.
● Understand the key components of U.S. state laws that govern the use of AI.(source)
○ Algorithmic Discrimination- an automated decision tool's differential treatment of an
individual or group based on their protected class. Bills that address this place the
burden on AI developers and businesses using AI, often referred to as deployers, to
proactively ensure that the technologies are not creating discriminatory outcomes
in the consumer and employment context
○ “Provisions found in most of these bills require regular impact assessments of AI tools
to ensure against discrimination; disclosure of such assessments to government
agencies; internal policies, programs and safeguards to prevent foreseeable risks
from AI; accommodating requests to opt-out of being subject to AI tools; disclosure
of the AI's use to affected persons; and an explanation of how the AI tool uses
personal information and how risks of discrimination are being minimized”
○ California (lost steam), Connecticut, Vermont, Hawaii, Illinois, New York, Oklahoma,
Rhode Island (lost steam), and Washington (lost steam)
■ Automated employment decision tools- "predictive data analytics" used by
employers to make employment decisions about hiring, firing, promotion and
compensation.
■ “require employers to provide advance notice to and obtain consent from job
applicants and employees who are subject to AEDTs, explain the qualifications
and characteristics that AI will assess to candidates, and conduct and
disclose regular impact assessments or bias audits of AI tools. Most of these
bills, however, include carveouts for the use of AI when promoting diversity or
affirmative action initiatives.”
○ Illinois, Massachusetts, New Jersey, New York, Vermont
■ AI Bill of Rights “provide state residents the rights to know when they are
interacting with AI, to know when their data is being used to inform AI, not to be
discriminated against by the use of AI, to have agency over their personal
data; to understand the outcomes of an AI system impacting them and to opt
out of an AI system”
○ Oklahoma and New York
■ Working Group Bills “creating government commissions, agencies or working
groups to study the implementation of AI technologies and develop
recommendations for future regulation”
○ Utah, Florida, Hawaii, Massachusetts
● Understand the Cyberspace Administration of China’s regulations on generative AI. (source)
(in effect from August 2023)
○ “The requirements of this regulation will apply to domestic companies and to overseas
generative AI service providers offering generative AI services to general public in
China. It is important to note also that the Generative AI Measures apply to services
offered to the public and not the use of generative AI services by enterprises.”
■ Required to ensure minors do not get addicted. No parental notice required
○ In the development and use of generative AI services, generative AI service providers
must:
■ not generate illegal content such as false or harmful information;
■ take effective measures to prevent the generation of discriminatory content;
■ not use advantages in algorithms, data, or platforms where this leads to
monopoly and unfair competitive behaviors;
■ not infringe on others’ portrait rights, reputation rights, honor rights, privacy rights
and personal information rights; and
■ take effective measures based on service types to increase the transparency of
generative AI services and the accuracy and reliability of generative AI content.
■ In respect of the training data, generative AI service providers must:
● use data and foundation models from legitimate sources;
● not infringe others’ legally owned intellectual property;
● obtain personal data with consent or under situations prescribed by the
law or administrative measures; and
● take effective measures to increase the quality of training data, their
truthfulness, accuracy, objectivity and diversity.
■ When providing generative AI services, generative AI service providers bear
cybersecurity obligations as online information content producers and personal
information protection obligations as personal information handlers and must:
● enter into service agreements with registered generative AI service users
which specify the rights and obligations of both parties;
● guide users on the legal use of generative AI technology and take
effective measures to prevent users from over-reliance on or “addiction
to” the generated AI service;
● not collect non-essential personal information, not illegally retain input
information and usage records which can be used to identify a user and
not illegally provide users’ input information and usage records to others;
● receive and settle data subjects’ requests;
● tag generated content such as photos and video pursuant to the
Administrative Provisions on Deep Synthesis of Internet-based
Information Services (Deep Synthesis Provisions);
● if illegal content is discovered, take measures to stop the generation and
transmission of and delete illegal content, take rectification measures
such as model improvement, and report to the relevant competent
authorities;
● where users are found to use generative AI services to conduct illegal
activities, take measures to warn the user, or restrict, suspend or
terminate the service, retain the records, and report to the relevant
competent authorities; and
● establish a mechanism for receiving and handling users’ complaints.
○ In relation to other legal obligations and enforcement supervision, generative AI
service providers shall:
● if the generative AI service comes with a public opinion attribute or social
mobilization ability, carry out a safety assessment obligation and (within
ten working days from the date of provision of services) go through
record-filing formalities pursuant to the Administrative Provisions on
Algorithm Recommendation for Internet Information Services (Algorithm
Provisions); and
● when the relevant competent authorities (e.g., the CAC) commence
supervisory checks on the generative AI service, cooperate with them,
explain the source, size and types of the training data, tagging rules and
the mechanisms and principles of the algorithm and provide necessary
technology and data, etc., for support and assistance.
● Extraterritorial scope: the Measures reach overseas providers offering generative AI
services to the general public in China.
IV.C Understand the similarities and differences among the major risk management frameworks and
standards (4/100)
● ISO 31000:2018 Risk Management – Guidelines.
○ “A management system is the framework of policies, processes and procedures
employed by an organization to ensure that it can fulfill the tasks required to achieve its
purpose and objectives.” (source)
○ Governance and culture; strategy and objective-setting; performance; information,
communications and reporting; and the review and revision of practices to enhance the
performance of the organization.
○ Emphasis on leadership endorsement and engagement, on organizational governance, and
on the iterative nature of risk management (regularly updating processes and policies
in response to new industry developments).
● United States National Institute of Standards and Technology, AI Risk Management
Framework (NIST AI RMF).
○ Voluntarily used to improve the ability to incorporate trustworthiness considerations into
the design, development, use, and evaluation of AI products, services, and systems.
○ Emphasis on documentation; applies beyond high-risk systems, assessing risk at scales
from the individual to the planetary; governance is the cornerstone. (source)
○ The framework breaks down the AI risk management process into four core functions:
"govern," "map," "measure," and "manage." (source)
○ Seven “characteristics of trustworthy AI”: valid and reliable; safe; secure and
resilient; accountable and transparent; explainable and interpretable;
privacy-enhanced; and fair, with harmful biases managed. (source)
● European Union proposal for a regulation laying down harmonized rules on AI (EU AIA). (source)
○ European law on artificial intelligence (AI) adopted May ‘24 – the first comprehensive law
on AI by a major regulator anywhere
○ The majority of obligations fall on providers (developers) of high-risk AI systems.
○ Users are natural or legal persons that deploy an AI system in a professional capacity,
not affected end-users.
○ All GPAI model providers must provide technical documentation, instructions for use,
comply with the Copyright Directive, and publish a summary about the content used for
training.
○ Free and open licence GPAI model providers only need to comply with copyright and
publish the training data summary, unless they present a systemic risk.
○ All providers of GPAI models that present a systemic risk – open or closed – must
also conduct model evaluations, adversarial testing, track and report serious
incidents and ensure cybersecurity protections.
○ Outlines “prohibited” AI systems
● Council of Europe Human Rights, Democracy, and the Rule of Law Assurance Framework
for AI Systems (HUDERIA).
○ Human Rights, Democracy, and the Rule of Law Impact Assessment (HUDERIA)
○ “define a methodology to carry out impact assessments of Artificial Intelligence (AI)
applications from the perspective of human rights, democracy, and the rule of law, based
on relevant Council of Europe (CoE) standards and the work already undertaken in this
field at the international and national level…, and to develop an impact assessment
model.” (source)
● IEEE 7000-21 Standard Model Process for Addressing Ethical Concerns during System
Design (Source)
○ “integrates ethical and functional requirements in systems engineering design and
development in order to mitigate risk and increase innovation” (source)
○ “A set of processes by which organizations can include consideration of ethical values
throughout the stages of concept exploration and development is established by this
standard. Management and engineering in transparent communication with selected
stakeholders for ethical values elicitation and prioritization is supported by this
standard, involving traceability of ethical values through an operational concept,
value propositions, and value dispositions in the system design. Processes that provide
for traceability of ethical values in the concept of operations, ethical requirements,
and ethical risk-based design are described in the standard. All sizes and types of
organizations using their own life cycle models are relevant to this standard.” (source)
● ISO/IEC Guide 51 Safety aspects – guidelines for their inclusion in standards.
○ Basic safety
○ Group safety
○ Product safety
○ Standards containing safety aspects
● Singapore Model AI Governance Framework. (source)
○ provides detailed and readily-implementable guidance to private sector organizations to
address key ethical and governance issues when deploying AI solutions
○ Decisions made by AI should be: EXPLAINABLE, TRANSPARENT & FAIR
○ AI systems should be HUMAN-CENTRIC
Domain 5: Understanding the AI Development Life Cycle (8/100)
Describes the AI life cycle and the broad context in which the AI risks are managed
V.A Understand the key steps in the AI system planning phase (2/100)
● Determine the business objectives and requirements.
● Determine the scope of the project.
● Determine the governance structure and responsibilities.
V.B Understand the key steps in the AI system design phase (2/100)
● Implement a data strategy that includes: data gathering, wrangling, cleansing and
labeling; and applying PETs such as anonymization, minimization, differential privacy and
federated learning (a minimal differential privacy sketch follows below).
● Determine AI system architecture and model selection (choose the algorithm according to
the desired level of accuracy and interpretability).
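As a toy illustration of one PET named above, here is a minimal differential privacy sketch
using the Laplace mechanism on a counting query; the data, epsilon value and function names
are hypothetical, and a real deployment would rely on a vetted library (e.g., OpenDP):

import numpy as np

def dp_count(data, predicate, epsilon=1.0):
    # A counting query has sensitivity 1 (adding or removing one record
    # changes the count by at most 1), so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    true_count = sum(1 for record in data if predicate(record))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Hypothetical usage: count users over 40 without exposing any individual.
ages = [23, 45, 31, 52, 38, 61, 29, 44]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))

A smaller epsilon means more noise and stronger privacy; choosing it is a governance
decision, not only an engineering one.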
V.C Understand the key steps in the AI system development phase (2/100)
● Build the model.
● Perform feature engineering.
● Perform model training.
● Perform model testing and validation (see the sketch after this list).
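A minimal end-to-end sketch of the development steps above, using scikit-learn on synthetic
stand-in data; the dataset and model choice here are hypothetical:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical tabular dataset standing in for real project data.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out unseen test data so evaluation reflects generalization.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = RandomForestClassifier(random_state=0)  # build the model
model.fit(X_train, y_train)                     # model training
print(accuracy_score(y_test, model.predict(X_test)))  # testing/validation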
V.D Understand the key steps in the AI system implementation phase (2/100)
● Perform readiness assessments.
● Deploy the model into production.
● Monitor and validate the model (a minimal drift-check sketch follows this list).
● Maintain the model.
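One common post-deployment check is input drift detection: comparing the distribution of live
inputs against the training-time baseline. A minimal sketch with SciPy, on hypothetical data:

import numpy as np
from scipy.stats import ks_2samp

# Hypothetical feature values: training-time baseline vs. live traffic.
baseline = np.random.default_rng(0).normal(0.0, 1.0, 5000)
live = np.random.default_rng(1).normal(0.3, 1.0, 5000)  # mean has drifted

# Kolmogorov-Smirnov test: has the input distribution shifted since training?
stat, p_value = ks_2samp(baseline, live)
if p_value < 0.01:
    print(f"drift detected (KS={stat:.3f}); consider retraining or rollback")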
Domain 6: Implementing Responsible AI Governance and Risk Management (27/100)
Explains how the major AI stakeholders collaborate, in a layered approach, to manage AI risks while
fulfilling the potential benefits AI systems have for society
Responsible AI = trustworthy/ethical AI; principles-based AI governance.
AI governance = system of policies, laws and regulations across international, national and
organizational levels. Helps stakeholders implement and oversee the use of AI while mitigating
risks and ensuring AI aligns with objectives and is done responsibly and ethically.
VI.A Ensure interoperability of AI risk management with other operational risk strategies (2/100)
● Ex. security risk, privacy risk, business risk.
○ “The AI models constitute valuable intellectual assets, demanding features that prevent
unauthorized access or tampering.” (source)
○ “Depending on the sector—such as healthcare or finance—the stack must be compliant
with industry-specific regulations like HIPAA or PCI-DSS” (source)
VI.B Integrate AI governance principles into the company (2/100)
● Adopt a pro-innovation mindset.
● Ensure governance is risk-centric.
● Ensure planning and design is consensus-driven.
● Ensure the team is outcome-focused.
● Adopt a non-prescriptive approach to allow for intelligent self-management.
● Ensure the framework is law-, industry-, and technology-agnostic.
○ IAPP best practices: keep up with legal changes; designate a specific team and person;
implement internal policy and principles; review supplier and partner principles and
policies; provide targeted training; use PETs; measure performance of the program and
employees; use an external advisory board.
VI.C Establish an AI governance infrastructure (5/100)
● Determine if you are a developer, deployer (those that make an AI system available to third
parties) or user; understand how responsibilities among companies that develop AI systems and
those that use or deploy them differ; establish governance processes for all parties; establish
framework for procuring and assessing AI software solutions.
● Establish and understand the roles and responsibilities of AI governance people and groups
including, but not limited to, the chief privacy officer, the chief ethics officer, the office for
responsible AI, the AI governance committee, the ethics board, architecture steering groups, AI
project managers, etc.
● Advocate for AI governance support from senior leadership and tech teams by:
○ Understanding pressures on tech teams to build AI solutions quickly and efficiently.
○ Understanding how data science and model operations teams work.
○ Being able to influence behavioral and cultural change.
● Establish organizational risk strategy and tolerance.
● Develop central inventory of AI and ML applications and repository of algorithms.
● Develop responsible AI accountability policies and incentive structures.
● Understand AI regulatory requirements.
● Set common AI terms and taxonomy for the organization.
● Provide knowledge resources and training to the enterprise to foster a culture that continuously
promotes ethical behavior.
● Determine AI maturity levels of business functions and address insufficiencies.
● Use and adapt existing privacy and data governance practices for AI management.
● Create policies to manage third party risk, to ensure end-to-end accountability.
● Understand differences in norms/expectations across countries.
VI.D Map, plan and scope the AI project (6/100)
● Define the business case and perform cost/benefit analysis where trade-offs are considered in
the design of AI systems. Why AI/ML?
● Identify and classify internal/external risks and contributing factors (prohibitive, major, moderate).
● Construct a probability/severity harms matrix and a risk mitigation hierarchy.
● Perform an algorithmic impact assessment leveraging PIAs as a starting point and tailoring to AI
process. Know when to perform and who to involve.
● Establish level of human involvement/oversight in AI decision making.
● Conduct a stakeholder engagement process that includes the following steps:
○ Evaluate stakeholder salience.
○ Include diversity of demographics, disciplines, experience, expertise and backgrounds.
○ Perform positionality exercise.
○ Determine level of engagement.
○ Establish engagement methods.
○ Identify AI actors during design, development, and deployment phases.
○ Create communication plans for regulators and consumers that reflect
compliance/disclosure obligations for transparency and explainability (UI copy, FAQs,
online documentation, model or system cards).
○ Determine feasibility of optionality and redress.
○ Chart data lineage and provenance, ensuring data is representative, accurate and
unbiased.
○ Use statistical sampling to identify data gaps.
○ Solicit early and continuous feedback from those who may be most impacted by AI
systems.
○ Use a test, evaluation, verification and validation (TEVV) process (a minimal
pre-/post-processing sketch follows this list).
■ Pre-processing = steps before the ML model runs, e.g., data cleanup, handling
missing values, normalization, feature extraction, encoding categorical variables.
■ Post-processing = steps after the ML model has run to adjust its output (e.g., to
improve fairness or meet business requirements).
■ Validation data = subset of the dataset used to assess performance of the ML model
during the training phase.
○ Create preliminary analysis report on risk factor and proportionate management.
○ Others
■ Data minimization
■ Data retention
■ Supervision and continuous improvement
■ Safeguards for AI model risks
■ If model contains PII that can be re-identified by the data controller, should be
available for data subject access rights
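A minimal sketch tying together pre-processing, validation data and post-processing as defined
above; the dataset, model and threshold choice are hypothetical:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)

# Split into train / validation / test: validation data tunes the model
# during development; test data stays unseen until final evaluation.
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# Pre-processing: fit the scaler on training data only, then apply the same
# transform everywhere (same features for training and testing data).
scaler = StandardScaler().fit(X_train)
model = LogisticRegression().fit(scaler.transform(X_train), y_train)

# Post-processing: use validation data to pick a decision threshold,
# e.g., to trade off false positives against false negatives.
val_scores = model.predict_proba(scaler.transform(X_val))[:, 1]
threshold = np.quantile(val_scores, 0.5)  # hypothetical business-driven choice

# Final, held-out evaluation with the chosen post-processing applied.
test_preds = model.predict_proba(scaler.transform(X_test))[:, 1] >= threshold
print("test accuracy:", np.mean(test_preds == y_test))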
VI.E Test and validate the AI system during development (6/100)
● Evaluate the trustworthiness, validity, safety, security, privacy and fairness of the AI system using
the following methods:
● Use edge cases, unseen data, or potential malicious input to test the AI models.
● Conduct repeatability assessments.
● Complete model cards/fact sheets.
○ A validation tool that lays out conditions, edge cases and unseen data to serve as
documentation of the AI tool’s credibility.
● Create counterfactual explanations (CFEs) (a minimal sketch follows this list).
● Conduct adversarial testing and threat modeling to identify security threats.
● Refer to OECD catalog of tools and metrics for trustworthy AI.
● Establish multiple layers of mitigation to stop system errors or failures at different levels or
modules of the AI system.
● Understand trade-offs among mitigation strategies.
● Apply key concepts of privacy-preserving machine learning and use privacy-enhancing
technologies and privacy-preserving machine learning techniques to help with privacy protection
in AI/ML systems.
● Understand why AI systems fail. Examples include: brittleness; hallucinations; embedded bias;
catastrophic forgetting; uncertainty; false positives.
● Determine degree of remediability of adverse impacts.
● Conduct risk tracking to document how risks may change over time.
● Consider, and select among different deployment strategies.
● Use the same features for training and testing data.
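A minimal sketch of a counterfactual explanation: the smallest change to an input that flips
the model's decision. The model, search strategy and data here are hypothetical; production
tools (e.g., DiCE) add constraints such as plausibility and actionability:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a hypothetical classifier standing in for a real decision system.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = LogisticRegression().fit(X, y)

def counterfactual(x, model, step=0.05, max_iter=500):
    # Greedy search: nudge the input along the linear model's coefficient
    # direction until the predicted class flips.
    original = model.predict([x])[0]
    direction = model.coef_[0] * (1 if original == 0 else -1)
    direction = direction / np.linalg.norm(direction)
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict([x_cf])[0] != original:
            return x_cf  # decision flipped: this is the counterfactual
        x_cf = x_cf + step * direction
    return None  # no counterfactual found within the search budget

x0 = X[0]
cf = counterfactual(x0, model)
if cf is not None:
    print("change needed to flip the decision:", np.round(cf - x0, 3))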
VI.F Manage and monitor AI systems after deployment (6/100)
● Perform post-hoc testing to determine if AI system goals were achieved, while being aware of
“automation bias.”
● Prioritize, triage and respond to internal and external risks.
● Ensure processes are in place to deactivate or localize AI systems as necessary (e.g., due to
regulatory requirements or performance issues).
● Continuously improve and maintain deployed systems by tuning and retraining with new data,
human feedback, etc.
● Determine the need for challenger models to supplant the champion model.
● Version each model and connect it to the data sets it was trained with (a minimal
sketch follows this list).
● Continuously monitor risks from third parties, including bad actors.
● Maintain and monitor communication plans and inform users when the AI system updates its
capabilities.
● Assess potential harms of publishing research derived from AI models.
● Conduct bug bashing and red teaming exercises.
○ Red teaming = simulating an adversary’s attacks on the system; useful because
traditional AI security testing may not cover AI-specific failure modes.
● Forecast and reduce risks of secondary/unintended uses and downstream harm of AI models.
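A minimal, file-based sketch of model versioning tied to training data, with
champion/challenger roles; names and paths are hypothetical, and real deployments would use a
registry tool such as MLflow:

import hashlib
import json
import time

def register_model(registry, name, version, model_path, dataset_path):
    # Record a model version with a fingerprint of its exact training data.
    with open(dataset_path, "rb") as f:
        data_hash = hashlib.sha256(f.read()).hexdigest()
    registry.setdefault(name, []).append({
        "version": version,
        "model_path": model_path,
        "training_data_sha256": data_hash,  # links model to its data set
        "registered_at": time.time(),
        "role": "challenger",  # promoted only after evaluation
    })

def promote_champion(registry, name, version):
    # Swap roles when a challenger outperforms the current champion.
    for entry in registry[name]:
        entry["role"] = "champion" if entry["version"] == version else "challenger"

registry = {}
# Hypothetical calls, assuming these paths exist in a real pipeline:
# register_model(registry, "credit_scorer", "1.2.0", "models/v1.2.0.pkl", "data/train_q3.csv")
# promote_champion(registry, "credit_scorer", "1.2.0")
print(json.dumps(registry, indent=2))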
Domain 7: Contemplating Ongoing Issues and Concerns (6/100)
Presents some of the current discussions and ideas about AI governance
VII.A Awareness of legal issues (2/100)
● How will a coherent tort liability framework be created to adapt to the unique circumstances of AI
and allocate responsibility among developers, deployers and users?
○ Web-scraping to train AI
■ In the US
● Computer Fraud and Abuse Act (1986); its application to web data scraping was
expanded by the U.S. Patriot Act in 2001 and, most recently, through passage of
the Identity Theft Enforcement and Restitution Act in 2008.
● Within the CCPA Final Regulations, approved by the California Office of
Administrative Law in March, Section 7012(h) further clarifies that: "A
business that neither collects nor controls the collection of personal
information directly from the consumer does not need to provide Notice
at Collection to the consumer if it neither sells nor shares the consumer's
personal information."
● Thus, according to a blog by Nate Garhart, special counsel at Farella,
Braun, and Martel, no notice needs to be provided for:
○ A data scraper that does not sell the scraped personal
information.
○ A data scraper that uses the scraped information for their own
purposes, even for marketing to identified customers.
○ A data scraper that collects data, deidentifies it, and then sells
the deidentified collection of data.
● On the other hand, according to Garhart, a scraper selling collections of
scraped data that include personal information would be subject to the
requirement to provide notice at collection. This may apply to AI products
trained on personal data scraped from the web.
■ In the EU: The GDPR outlines six lawful bases that can justify data collection
and processing: consent, contract, legal obligation, vital interests, public
task and legitimate interests. Under most interpretations of the GDPR, these
Article 6 requirements apply whether the information is obtained from a
publicly accessible source or collected directly from the data subject.
● Web scraping remains legally challenging in the EU: fines have been issued and the
legitimate-interest basis rejected, although some fines were later overturned.
Publicly available personal data is still subject to GDPR protections (processing
is not permitted by default unless a lawful basis applies).
● What are the challenges surrounding AI model and data licensing? (source)
○ Intellectual Property (IP) (source)
○ What intellectual property (IP) rights exist in the output of an AI model?
○ Can AI models create trade secrets, copyrights, or even inventive subject matter? If so,
who owns the associated IP rights? The owner could be the licensor or licensee (or
maybe the model itself?)
■ IP ownership: so far, the USPTO and other authorities assert that only humans can
own IP.
■ Performance and reliability: Licensees should insist on minimum performance
metrics to ensure that the licensed model provides adequate accuracy, reliability,
and robustness. If the model does not perform as expected, the business
consequences may be severe and could lead to litigation. Licensees should craft
warranties and indemnities to ensure that they do not unreasonably bear the risk
of underperformance.
■ Data protection and confidentiality: While data transfers are a standard
component of software licenses, using licensee data to improve AI products
presents additional privacy and cybersecurity concerns beyond those of a typical
software agreement.
■ Licensees should ensure that any uses of data are consistent with applicable
privacy laws and consistent with the privacy notices provided to their users.
Numerous states are adopting laws relating to AI and automated
decision-making, so licensees will need to stay updated on developments in this
area to ensure ongoing compliance.
○ Can AI models use copyrighted material as fair use?
■ Precedent - using copyrighted work to power search = fair use (Google)
■ Stable Diffusion, etc - LAION-5B (indiscriminate scraping of the web)
■ Stability AI allowing artists to opt-out but putting onus on copyright holders
■ Terms of service
● Providers need to attest to proper licensure of their inputs
● Broad indemnification for infringement from input license holders
○ AI being used to manage IP portfolios
■ E.g., IBM Watson using copyrighted material (existing patents) to inform IP
insights
○ AI can create proliferation of infringing content making enforcement more difficult
● Can we develop systems that respect IP rights?
○ “The resulting generative AI models need not be trained from scratch but can build upon
open-source generative AI that has used lawfully sourced content. This would
enable content creators to produce content in the same style as their own work with an
audit trail to their own data lake, or to license the use of such tools to interested parties
with cleared title in both the AI’s training data and its outputs” (source)
○ Tangent argument: “AI systems can also generate new works protectable by copyright,
such as creating new artwork or music. However, most copyright statutes do not yet
clearly define who owns machine-generated works.” (source)
VII.B Awareness of user concerns (2/100)
● How do we properly educate users about the functions and limitations of AI systems?
○ AI Literacy - “equipping individuals with the knowledge and skills to understand, use, and
interact with AI responsibly and effectively. It's about enabling people to make informed
decisions about AI technologies, understand their implications, and navigate the ethical
considerations they present.” (Source)
● How do we upskill and reskill the workforce to take full advantage of AI benefits? (source)
○ By nurturing complementary skills, it is possible to create a workforce that seamlessly
integrates with AI, leveraging its capabilities to amplify human potential.
○ Creativity and Innovation through Critical Thinking and Problem-Solving by leveraging
AI's insights to develop groundbreaking solutions
○ Communication and Collaboration by translating AI's findings into actionable strategies
○ Emotional Intelligence and Social Skills to foster trust and collaboration in AI-driven
environments
○ Ethical Decision-Making and Bias Awareness to be mindful of ethical implications and
social impact, to ensure that AI is used responsibly and ethically
○ While reskilling is the process of learning new skills to adapt to new job requirements,
upskilling helps enhance existing skills to improve one's performance in their current job.
There are several reasons why reskilling and upskilling are important in the AI-driven
workplace, such as:
○ Staying competitive: Workers with the latest skills are more likely to be hired and
promoted.
○ Increasing job satisfaction: Workers who are challenged and engaged in their work are
more likely to be satisfied with their jobs.
○ Improving productivity: Workers with the right skills are more productive and efficient.
○ Reducing the risk of job displacement: Workers who are adaptable and can learn new
skills are less likely to be displaced by automation.
○ Here are a few ways in which people leaders can leverage AI / ML technology for skilling:
■ Assessing skill gaps: Identify the skills that are needed for future jobs in the
organization.
■ Developing training programs: Create or partner with training providers to offer
training programs that teach the skills that are needed.
■ Providing financial assistance: Offer financial assistance to employees who are
participating in training programs.
■ Promoting a culture of lifelong learning: Encourage employees to take advantage
of learning opportunities.
○ Can there be an opt-out for a non-AI alternative? “You should be able to opt out, where
appropriate, and have access to a person who can quickly consider and remedy
problems you encounter. You should be able to opt out from automated systems in favor
of a human alternative, where appropriate.” (source)
○ AI issues in the workplace (source)
■ Personal data disclosure to AI tools
● Security risk: the organization loses control over the data.
● Vet the tool’s terms of service, and verify that security is robust via a
vendor security assessment (VSA).
● De-identifying data can help (since privacy law generally does not regulate
de-identified data), but the bar for de-identification is relatively high (a
minimal pseudonymization sketch follows this list).
■ AI tool processing may violate existing data processing commitments / regulatory
requirements
● Lawful basis / notice requirements
■ Must determine how to comply with data subject access requests under existing
law (e.g., deletion of data from a trained model may be hard)
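A minimal sketch of keyed pseudonymization before sending records to an AI tool; the key
handling and field names are hypothetical, and note that pseudonymized data usually does not
meet the higher legal bar for de-identification:

import hashlib
import hmac
import os

# Hypothetical secret key; in practice, store and rotate it in a secrets manager.
SECRET_KEY = os.environ.get("PSEUDONYM_KEY", "change-me").encode()

def pseudonymize(user_id: str) -> str:
    # Replace a direct identifier with a keyed, non-reversible token.
    return hmac.new(SECRET_KEY, user_id.encode(), hashlib.sha256).hexdigest()

record = {"user_id": "alice@example.com", "prompt": "summarize this contract"}
record["user_id"] = pseudonymize(record["user_id"])
print(record)  # the prompt itself may still contain personal data; review it too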
VII.C Awareness of AI auditing and accountability issues (2/100)
● How can we build a profession of certified third-party auditors globally – and consistent
frameworks and standards for them?
● What are the markers/indicators that determine when an AI system should be subject to
enhanced accountability, such as third-party audits (e.g., automated decision-making, sensitive
data, others)?
● How do we enable companies to remain productive using automated checks for AI governance
and associated ethical issues, while adapting this automation quickly to the evolving standards
and technology?
Other Notes:
One of the findings from the IAPP Privacy and Consumer Trust Report concerned a set of behaviors
referred to as privacy self-defense. These include deciding against an online purchase, deleting a
smartphone app or avoiding a particular website due to privacy concerns. When consumers lose trust in
how their data is being collected and used, they are more likely to engage in these self-defensive
behaviors to protect their privacy.
The European Commission signed off on its Ethics Guidelines for Trustworthy AI, which outline
four principles, or “ethical imperatives,” calling for AI systems to respect human autonomy,
prevent harm, incorporate fairness and enable explicability. Another layer of guidance advises
that AI respect human dignity, individual freedom, democracy, justice, the rule of law,
equality, non-discrimination, solidarity and citizens’ rights. (source)
SB 1047
Applies to frontier models (exceeding certain computational thresholds and capable of
generating text, code, audio or visual content) that are accessible to CA users.
Frontier models
● Thresholds still being discussed
● GPT-4, PaLM 2, DALL-E 2, Stable Diffusion, Codex
Key obligations:
● Registration - register frontier AI models with CA Dept of Technology
● Risk Assessment - comprehensive risk assessment must be conducted to identify harms/biases
● Mitigation - developers must implement measures to mitigate identified risks and biases such as
○ Data quality and diversity checks
○ Bias detection and correction
○ Transparency about model limitations
● User notification - users must be informed about potential risks/limitations
● Compliance monitoring - systems to monitor and ensure compliance
Significant fines for failure to comply
● $25K per violation, max $100K per model
● CA AG can seek an injunction, with fees and costs.
Open Source
● Challenges
○ Data provenance, transparency, legality, etc.
○ Evaluation (testing/monitoring)
○ Domain expertise