Short Form AIGP Study Outline
Generative AI vs. Predictive AI
● Objective / Function: Generative AI generates new, original content or data; predictive AI predicts and analyzes existing patterns or outcomes.
● Learning process: Generative AI learns patterns and relationships in data; predictive AI learns from historical data to make predictions.
● Challenges: Generative AI may lack specificity in output; predictive AI is limited to existing patterns and may miss novel scenarios.
I.D. Understand the history of AI and the evolution of data science (2/100)
II.A. Understand the core risks and harms posed by AI systems (4/100)
● Understand the potential harms to an individual (civil rights, economic opportunity, safety)
○ Inaccuracy leading to human rights violations
■ Individuals implicated for crimes they did not commit
■ Civil liberties curtailed by law enforcement use of facial recognition; excessive surveillance
■ Chilling effect on public discourse and activism
○ Lack of informed consent for decision-making and data collection
■ Unforeseen downstream use of already-shared data
■ Lack of awareness that collection is taking place.
○ Security - bad actors can steal sensitive PII (SPII), e.g., facial data.
● Understand the potential harms to a group (discrimination towards sub-groups).
○ Disproportionate impact on people of color
■ Accuracy - unreliability of FBI records impacting POC job seekers.
■ Stereotypes built into AI
■ Disproportionate data collection (e.g., crime registries containing more POC)
● Understand the potential harms to society (democratic process, public trust in governmental
institutions, educational access, jobs redistribution).
○ Concentration of power in hands of a few (govts, large tech cos, developed countries)
○ Disinformation online
● Understand the potential harms to a company or institution (reputational, cultural, economic,
acceleration risks).
○ Sensitive Data Exposure: Unintended exposure of confidential information, including
customer and business data, posing risks of identity theft, financial fraud, and loss of
public trust. The White House AI Executive Order prioritizes privacy/security, indicating closer scrutiny.
○ Cybersecurity Vulnerabilities: Integration of AI with entities’ institutional platforms can
create entry points for hackers, risking not just data theft but also potential disruption of
operations, particularly supply chains.
○ Data Control Concerns: Relying on external AI solutions can lead to issues with data
control and governance, and can potentially expose companies to additional risks if
vendors do not meet ESG or cybersecurity standards.
○ Opaque Decision Processes: The complexity of AI algorithms, especially in deep
learning, often results in a lack of transparency and explainability, making it difficult for
stakeholders to understand how decisions are made. This “black box” nature of AI can
hinder accountability and trust in AI-driven ESG initiatives.
○ Accountability Challenges: In cases where AI-driven decisions lead to adverse ESG
outcomes, it can be difficult to attribute responsibility.
○ Compliance Complexity: Difficulty keeping up with fast-growing AI laws, regs and
standards, increasing the risk of inadvertent non-compliance.
○ Legal Uncertainties: Rapidly evolving AI technologies can outpace existing legal
frameworks, creating uncertainties about liability for collection, maintenance and use of
data, intellectual property rights, and other legal issues
● Understand the potential harms to an ecosystem (natural resources, environment, supply
chain). (source)
○ High Energy Consumption: The computation-intensive nature of training and running
AI, particularly large models, can lead to high energy consumption and significant carbon
footprints.
○ Life Cycle Impact of AI Hardware: The hardware lifecycle needed to run AI (e.g., servers, data centers) contributes to environmental concerns such as electronic waste and resource depletion.
● “Human Centric” AI Systems = AI systems that amplify/augment rather than displace human abilities. They preserve human control, ensuring AI meets our needs while operating transparently, delivering equitable outcomes and respecting privacy.
● Accountable AI system characteristics = safe, secure/resilient, valid/reliable, fair.
○ Accountability: responsibility to ensure AI system is “ethical, fair, transparent and
compliant” and ensures the actions, decisions and outcomes of an AI system can be
traced back to the entity responsible for it.
● Transparent AI system = makes information available to stakeholders, e.g., whether AI is used and how the model works (e.g., through model cards, system cards). Important for explainability and accountability.
● Explainable AI system (XAI) = the ability to describe or provide sufficient information about how an AI system generates a specific output or arrives at a decision in a specific context to a predetermined addressee. Important for transparency and trust.
● Privacy enhancing technologies (PET) = Tech approaches that allow for data collection /
processing / sharing while safeguarding privacy. They enable a relatively high level of utility from
data while minimizing the need for extensive data collection and processing.
○ Examples (see the differential privacy sketch after this list):
■ Homomorphic encryption - allows computation on encrypted data without decrypting it;
■ Secure multi-party computation (SMPC) - multiple parties jointly compute while keeping their data secret from one another;
■ Federated learning - ML that trains models across multiple decentralized devices without transferring raw data;
■ Synthetic data - generated data with the same statistical properties and correlations as real data but without PII (comes with its own risks);
■ Differential privacy - adds noise to make individuals hard to identify;
■ Trusted execution environment (TEE) - a secure area within a computer system where sensitive operations are executed in isolation from the main processor/memory.
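An illustrative sketch of one PET from the list above: differential privacy via the Laplace mechanism on a counting query. The data, epsilon value and function names are hypothetical, and a real deployment would use a vetted library rather than hand-rolled noise:

    import math
    import random

    def laplace_noise(scale):
        # Inverse-CDF sample from Laplace(0, scale).
        u = random.random() - 0.5
        return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

    def dp_count(records, predicate, epsilon):
        # A counting query has L1 sensitivity 1 (one person changes the count
        # by at most 1), so Laplace noise with scale 1/epsilon satisfies
        # epsilon-differential privacy for this query.
        true_count = sum(1 for r in records if predicate(r))
        return true_count + laplace_noise(1.0 / epsilon)

    # Hypothetical records: report how many ages exceed 40 without revealing
    # whether any single individual is in that group.
    ages = [23, 45, 31, 62, 54, 29, 41]
    print(dp_count(ages, lambda a: a > 40, epsilon=0.5))

Smaller epsilon means more noise and stronger privacy; the utility/privacy trade-off is set by that single parameter.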
II.C. Understand the similarities and differences among existing and emerging ethical guidance on AI
(2/100)
● FIPPS, ECHR and OECD principles.
○ Fair Information Practice Principles (FIPPs) = a collection of widely accepted principles that agencies use when evaluating information systems, processes, programs, and activities that affect individual privacy. They are principles, not requirements.
■ Access and Amendment- Agencies should provide individuals with appropriate
access to PII and appropriate opportunity to correct or amend PII.
■ Accountability- Agencies should be accountable for complying with these
principles and applicable privacy requirements, and should appropriately monitor,
audit, and document compliance. Clear R&R for PII for employees/contractors.
Provide appropriate training.
■ Authority- Agencies should only create, collect, use, process, store, maintain,
disseminate, or disclose PII if they have authority to do so, and should identify
this authority in the appropriate notice.
■ Minimization- Agencies should only create, collect, use, process, store,
maintain, disseminate, or disclose PII that is directly relevant and necessary to
accomplish a legally authorized purpose, and should only maintain PII for as long
as is necessary to accomplish the purpose.
■ Quality and Integrity- Agencies should create, collect, use, process, store,
maintain, disseminate, or disclose PII with such accuracy, relevance, timeliness,
and completeness as is reasonably necessary to ensure fairness to the
individual.
■ Individual Participation- Agencies should involve the individual in the process
of using PII and, to the extent practicable, seek individual consent for the
creation, collection, use, processing, storage, maintenance, dissemination, or
disclosure of PII. Agencies should also establish procedures to receive and
address individuals’ privacy-related complaints and inquiries.
■ Purpose Specification and Use Limitation- Agencies should provide notice of
the specific purpose for which PII is collected and should only use, process,
store, maintain, disseminate, or disclose PII for a purpose that is explained in the
notice and is compatible with the purpose for which the PII was collected, or that
is otherwise legally authorized.
■ Security- Agencies should establish administrative, technical, and physical
safeguards to protect PII commensurate with the risk and magnitude of the harm
that would result from its unauthorized access, use, modification, loss,
destruction, dissemination, or disclosure.
■ Transparency- Agencies should be transparent about information policies and
practices with respect to PII, and should provide clear and accessible notice
regarding creation, collection, use, processing, storage, maintenance,
dissemination, and disclosure of PII.
○ European Court of Human Rights (ECtHR) = rules on individual or State applications alleging violations of the civil and political rights set out in the European Convention on Human Rights (ECHR).
○ OECD AI Principles = promote AI that is innovative and trustworthy and that respects human rights and democratic values. Value-based principles:
■ Inclusive growth, sustainable development and well-being Stakeholders
should proactively engage in responsible stewardship of trustworthy AI in pursuit
of beneficial outcomes for people and the planet, such as augmenting human
capabilities and enhancing creativity, advancing inclusion of underrepresented
populations, reducing economic, social, gender and other inequalities, and
protecting natural environments, thus invigorating inclusive growth, sustainable
development and well-being.
■ Human-centred values and fairness AI actors should respect the rule of law,
human rights and democratic values, throughout the AI system lifecycle. These
include freedom, dignity and autonomy, privacy and data protection,
non-discrimination and equality, diversity, fairness, social justice, and
internationally recognised labor rights. To this end, AI actors should implement
mechanisms and safeguards, such as capacity for human determination, that are
appropriate to the context and consistent with the state of art.
■ Transparency and explainability AI Actors should commit to transparency and
responsible disclosure regarding AI systems. To this end, they should provide
meaningful information, appropriate to the context, and consistent with the state
of art:
● to foster a general understanding of AI systems,
● to make stakeholders aware of their interactions with AI systems,
including in the workplace,
● to enable those affected by an AI system to understand the outcome,
● Contestability to enable those adversely affected by an AI system to
challenge its outcome based on plain and easy-to-understand
information on the factors, and the logic that served as the basis for the
prediction, recommendation or decision.
■ Robustness, security and safety AI systems should be robust (maintains
function even in changed/adversarial circumstances), secure and safe (designed
to minimize potential harm) throughout their entire lifecycle so that, in conditions
of normal use, foreseeable use or misuse, or other adverse conditions, they
function appropriately and do not pose unreasonable safety risk.
● To this end, AI actors should ensure traceability, including in relation to
datasets, processes and decisions made during the AI system lifecycle,
to enable analysis of the AI system’s outcomes and responses to inquiry,
appropriate to the context and consistent with the state of art.
● AI actors should, based on their roles, the context, and their ability to act,
apply a systematic risk management approach to each phase of the AI
system lifecycle on a continuous basis to address risks related to AI
systems, including privacy, digital security, safety and bias.
■ Accountability AI actors should be accountable for the proper functioning of AI
systems and for the respect of the above principles, based on their roles, the
context, and consistent with the state of art.
○ White House Office of Science and Technology Policy Blueprint for an AI Bill of
Rights
■ 5 principles:
● Safe and Effective Systems You should be protected from unsafe or
ineffective systems.
● Algorithmic Discrimination Protections You should not face
discrimination by algorithms and systems should be used and designed
in an equitable way
● Data Privacy You should be protected from abusive data practices via
built-in protections and you should have agency over how data about you
is used.
● Notice and Explanation You should know that an automated system is
being used and understand how and why it contributes to outcomes that
impact you.
● Human Alternatives, Considerations, and Fallback You should be
able to opt out, where appropriate, and have access to a person who can
quickly consider and remedy problems you encounter.
○ European Commission High-Level Expert Group on AI (AI HLEG)
■ Deliverable 1: Ethics Guidelines for Trustworthy AI The document puts forward a human-centric approach on AI and lists 7 key requirements that AI systems should meet in order to be trustworthy.
■ Deliverable 2: Policy and Investment Recommendations for Trustworthy AI Building on its first deliverable, the group put forward 33 recommendations to guide trustworthy AI towards sustainability, growth, competitiveness and inclusion, while empowering, benefiting and protecting European citizens.
■ Deliverable 3: The final Assessment List for Trustworthy AI (ALTAI) A practical tool that translates the Ethics Guidelines into an accessible and dynamic self-assessment checklist, usable by developers and deployers (users) of AI who want to implement the key requirements. Available as a prototype web-based tool and in PDF format.
■ Deliverable 4: Sectoral Considerations on the Policy and Investment Recommendations The document explores the possible implementation of the recommendations, previously published by the group, in three specific areas of application: Public Sector, Healthcare, and Manufacturing & the Internet of Things.
○ UNESCO Principles
■ Goals: provide a basis for AI systems to work for the good of humanity and prevent harm; also aims at stimulating the peaceful use of AI systems.
■ Method: Establish ethical frameworks as well as practical policy
recommendations with a strong emphasis on inclusion / ESG issues.
○ Asilomar AI Principles = 23 principles divided into 3 categories (research issues; ethics and values; longer-term issues), developed at a conference sponsored by the Future of Life Institute (a nonprofit)
● IEEE
○ Eight general principles: human rights and well-being, transparency,
accountability, effectiveness, competence and “awareness of misuse” in addition
to “data agency,” giving individuals control over their data
○ Ethics-by-design approach
○ Working group in progress for new standard for AI systems
● CNIL AI Action Plan.
○ Understanding the functioning of AI systems and their impacts on people:
The CNIL is focusing on a few key AI privacy issues (e.g., scraping of publicly
available PII from the web, protection of AI system users' PII, DSARs concerning AI training data).
○ Supporting innovative players in the AI ecosystem in France and Europe:
call for projects to participate in 2023 regulatory sandbox and dialogue with
developers / R&D centers.
○ Auditing and controlling AI systems: The CNIL plans to develop a tool to audit
AI systems and will continue to investigate complaints lodged with its office
related to AI, including generative AI.
III.A. Understand the existing laws that interact with AI use (6/100)
● Know the laws that address unfair and deceptive practices.
○ Federal Trade Commission (FTC) Act (US), as amended by the Wheeler-Lea Act of 1938
○ EU Unfair Commercial Practices Directive (2005/29/EC)
○ The Children's Online Privacy Protection Act (COPPA), which governs the collection of
information about minors
○ The Gramm Leach Bliley Act (GLBA), which governs personal information collected by
banks and financial institutions
○ Telemarketing Sales Rule (TSR), Telephone Consumer Protection Act of 1991, and the
Do-Not-Call Registry
○ Junk Fax Protection Act of 2005 (JFPA)
○ Controlling the Assault of Non-Solicited Pornography and Marketing Act of 2003
(CAN-SPAM) and the Wireless Domain Registry
○ Telecommunications Act of 1996 and Customer Proprietary Network Information (CPNI)
○ Cable Communications Policy Act of 1984
○ Video Privacy Protection Act of 1988 (VPPA) and Video Privacy Protection Act
Amendments of 2012
○ Driver's Privacy Protection Act (DPPA)
● Know relevant non-discrimination laws (credit, employment, insurance, housing, etc.).
○ FCRA regulates “consumer reporting agencies” and users of such reports. A GenAI service
could meet this definition if the service regularly produces reports about individuals'
"character, general reputation, personal characteristics, or mode of living" and these
reports are used for employment purposes.
○ Confidentiality of Substance Use Disorder Patient Records Rule Prohibits patient
information from being used to initiate criminal charges or as a predicate to conduct a
criminal investigation of the patient
○ Equal Credit Opportunity Act - prohibits discrimination against credit applicants on the
basis of race, color, religion, national origin, sex, marital status, age, or receipt of public assistance
○ Fair and Accurate Credit Transactions Act of 2003 (FACTA) contains protections against
identity theft, “red flags” rules
○ Privacy Protection Act of 1980 (PPA) The PPA requires law enforcement to obtain a
subpoena in order to obtain First Amendment-protected materials
○ Title VII of the Civil Rights Act of 1964 (“CRA”) prohibits employment discrimination on
the basis of race, color, religion, sex, or national origin
○ Title I of the Americans With Disabilities Act (“ADA”) prohibits employment
discrimination against “qualified” individuals with disabilities
○ Genetic Information Nondiscrimination Act of 2008 (GINA)
○ Illinois Artificial Intelligence Video Interview Act – Requires that any employer relying
on AI technology to analyze a screening interview must provide information to
candidates and obtain consent; must also report demographic data to the state to
analyze bias
○ Maryland HB 1202 – Prohibits the use of facial recognition technology in the hiring
process without consent of applicant
○ NYC Local Law 144 – A bias audit must be conducted before use of an automated
employment decision tool; notice must be provided to applicants and an alternative
selection process must be offered. NO NOTICE OF THE BIAS AUDIT ITSELF IS REQUIRED; a
summary of its results must be published instead. (See the impact-ratio sketch after this list.)
○ The Wiretap Act
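For context on the bias audits referenced in the NYC Local Law 144 item above, here is a minimal sketch of the kind of impact-ratio calculation such audits report: the selection rate of each demographic category divided by the rate of the most-selected category. The category names and counts are hypothetical:

    def impact_ratios(selected, total):
        # Selection rate per category, scaled by the highest category's rate.
        rates = {g: selected[g] / total[g] for g in total}
        top = max(rates.values())
        return {g: rate / top for g, rate in rates.items()}

    # Hypothetical screening outcomes by demographic category.
    selected = {"category_a": 40, "category_b": 24}
    total = {"category_a": 100, "category_b": 100}
    print(impact_ratios(selected, total))  # {'category_a': 1.0, 'category_b': 0.6}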
● Know relevant product safety laws.
○ Consumer Product Safety Act (CPSA), enacted in 1972 to protect consumers
against the risk of injury from consumer products, enable consumers to
evaluate product safety, establish consistent safety standards, and promote research
into the causes and prevention of injuries and deaths associated with unsafe products.
○ The General Product Safety Regulation (GPSR) requires that all consumer products
on the EU markets are safe and it establishes specific obligations for businesses to
ensure it. It applies to non-food products and to all sales channels.
● Know relevant IP law.
○ The USPTO has yet to fully establish guidance on the topic but generally recognizes IP rights only for human inventors; the European Patent Office and the EU Intellectual Property Office take similar positions.
● Understand the basic requirements of the EU Digital Services Act (transparency of
recommender systems).
○ Fully applicable since February 2024. The DSA imposes obligations on all information society services
that offer an intermediary service to recipients who are located or established in the EU,
regardless of whether that intermediary service provider is incorporated or located within
the EU.
■ Transparency obligations: Advertising, user profiling, and recommender
systems
● Under Article 26, providers of online platforms must supply users with
information relating to any online advertisements on its platform so that
the recipients of the services can clearly identify that such information
constitutes an advertisement. Providers of online platforms are prohibited
from presenting targeted advertisements based on profiling using either
the personal data of minors or special category data (as defined in the
GDPR).
● Article 27 requires providers of online platforms that use
recommendation systems to set out in their T&Cs the main parameters
they use for such systems, including any available options for recipients
to modify or influence them. Under Article 38, VLOPs and VLOSEs must
provide at least one option (not based on profiling) for users to modify the
parameters used.
● Know relevant privacy laws concerning the use of data.
○ The Federal (US) Privacy Act of 1974 and the E-Government Act of 2002 require
agencies to address the privacy implications of any system that collects identifiable
information on the public
○ HIPAA - health info. HITECH (2009) increased HIPAA penalties and gave individuals
greater access rights.
○ FERPA - student edu records
○ Protection of Pupil Rights Amendment of 1978 (PPRA), which prevents the sale of
student information for commercial purposes
○ CCPA/CPRA, Virginia Consumer Data Protection Act (VCDPA), CPA, CTDPA, Montana’s
Consumer Data Privacy Act, Delaware Personal Data Privacy Act, Utah Consumer
Privacy Act (UCPA), Oregon Consumer Privacy Act (OCPA), Iowa’s Consumer Data
Protection Act (ICDPA), New Jersey Data Privacy Act (NJDPA), Indiana Consumer Data
Protection Act, Tennessee Information Protection Act, Texas Data Privacy and Security
Act (TDPSA)
III.B. Understanding key GDPR intersections (3/100)
● Understand automated decision making, data protection impact assessments, anonymization,
and how they relate to AI systems
○ Automated decision-making
■ GDPR Art. 22 - right not to be subject to a decision based solely on automated
processing that produces legal or similarly significant effects (narrow exceptions:
explicit consent, necessity for a contract, or authorization by law)
■ Impact on AI systems: many AI systems do exactly that
○ DPIA
■ GDPR Art. 35 mandates a DPIA for high-risk processing (e.g., systematic processing on
a large scale, special categories of data). It must cover the data processing, its purposes,
proportionality, risks to the rights of data subjects, and mitigations.
■ Impact on AI systems: a DPIA may be required for training on certain datasets. A DPIA is
not the same as the conformity assessment required under the EU AI Act.
○ Anonymization
■ GDPR no explicit reference to anonymization, but “anonymized data” would not
be PII. Anonymizing the training data can help to mitigate concerns by separating
the information from the person.
■ Article 10 of EU AI Act has de-biasing exception to GDPR ban on processing
sensitive data. (source)
● Understand the intersection between requirements for AI conformity assessments and
DPIAs.
○ The EU AI Act requires conformity assessments showing that high-risk AI systems (HRAIs)
meet the requirements of Articles 8-15 before they are placed on the market.
■ Requirements
● Art. 9 - risk management system
● Art. 10 - data and data governance
● Art. 11 - technical documentation
● Art. 12 - record-keeping
● Art. 13 - transparency and provision of information to deployers
● Art. 14 - human oversight
● Art. 15 - accuracy, robustness and cybersecurity
■ GDPR is explicitly implicated when personal data is processed; an HRAI may need
both a DPIA and a conformity assessment (CA).
DPIA vs. conformity assessment (CA):
● Human oversight
○ DPIA: GDPR Art. 22 - users have the right to know about an automated decision and its logic, and to challenge it, for all AI systems.
○ CA: EU AI Act Art. 14 - need to ensure effective human oversight over HRAIs; helps ensure decisions are fair, transparent and unbiased.
Domain 4: Understanding the Existing and Emerging AI Laws and Standards (12/100)
IV.C Understand the similarities and differences among the major risk management frameworks and
standards (4/100)
● ISO 31000:2018 Risk Management – Guidelines.
○ “A management system is the framework of policies, processes and procedures
employed by an organization to ensure that it can fulfill the tasks required to achieve its
purpose and objectives.” (source)
○ Governance and culture; strategy and objective-setting; performance; information,
communications and reporting; and the review and revision of practices to enhance the
performance of the organization.
○ Emphasis on leadership endorsement and engagement, emphasis on organizational
governance, emphasis on iterative nature of risk management (regularly updating
processes and policies in response to new industry
developments)
V.A Understand the key steps in the AI system planning phase (2/100)
● Determine the business objectives and requirements.
● Determine the scope of the project.
● Determine the governance structure and responsibilities.
V.B Understand the key steps in the AI system design phase (2/100)
● Implement a data strategy that includes: data gathering, wrangling, cleansing and labeling; applying PETs like anonymization, minimization, differential privacy and federated learning.
● Determine AI system architecture and model selection (choose the algorithm according to the desired level of accuracy and interpretability).
V.C Understand the key steps in the AI system development phase (2/100)
● Build the model.
● Perform feature engineering.
● Perform model training.
● Perform model testing and validation.
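A minimal scikit-learn sketch of the training/testing steps above, assuming a toy dataset; the model choice, split size and random seed are illustrative:

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Hold out a test set; the same features are used for training and testing.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    # Build and train the model.
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    # Validate on held-out data before moving toward deployment.
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))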
V.D Understand the key steps in the AI system implementation phase (2/100)
● Perform readiness assessments.
● Deploy the model into production.
● Monitor and validate the model.
● Maintain the model.
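To make the monitoring step concrete, a sketch of one common drift check: the Population Stability Index (PSI) between training-time scores and live scores. The ~0.2 alert threshold is a rule of thumb rather than a standard, and the sample scores are hypothetical:

    import math

    def psi(expected, actual, bins=10):
        # Population Stability Index between two score samples; values
        # above roughly 0.2 are often treated as meaningful drift.
        lo = min(expected + actual)
        width = (max(expected + actual) - lo) / bins + 1e-9
        def frac(sample, i):
            count = sum(1 for s in sample if lo + i * width <= s < lo + (i + 1) * width)
            return max(count / len(sample), 1e-6)  # avoid log(0)
        return sum(
            (frac(actual, i) - frac(expected, i)) * math.log(frac(actual, i) / frac(expected, i))
            for i in range(bins)
        )

    # Hypothetical scores: baseline from validation, live from production.
    baseline = [0.20, 0.30, 0.35, 0.40, 0.50, 0.55, 0.60, 0.70]
    live = [0.50, 0.60, 0.65, 0.70, 0.75, 0.80, 0.85, 0.90]
    print("PSI:", psi(baseline, live))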
AI Governance = a system of policies, laws and regulations across international, national and organizational
levels. Helps stakeholders implement and oversee the use of AI while mitigating risks and ensuring AI
aligns with objectives and is pursued responsibly and ethically.
VI.A Ensure interoperability of AI risk management with other operational risk strategies (2/100)
● Ex. security risk, privacy risk, business risk.
○ “The AI models constitute valuable intellectual assets, demanding features that prevent
unauthorized access or tampering.” (source)
○ “Depending on the sector—such as healthcare or finance—the stack must be compliant
with industry-specific regulations like HIPAA or PCI-DSS” (source)
● Determine if you are a developer, deployer (those that make an AI system available to third
parties) or user; understand how responsibilities among companies that develop AI systems and
those that use or deploy them differ; establish governance processes for all parties; establish
framework for procuring and assessing AI software solutions.
● Establish and understand the roles and responsibilities of AI governance people and groups
including, but not limited to, the chief privacy officer, the chief ethics officer, the office for
responsible AI, the AI governance committee, the ethics board, architecture steering groups, AI
project managers, etc.
● Advocate for AI governance support from senior leadership and tech teams by:
○ Understanding pressures on tech teams to build AI solutions quickly and efficiently.
○ Understanding how data science and model operations teams work.
○ Being able to influence behavioral and cultural change.
● Establish organizational risk strategy and tolerance.
● Develop central inventory of AI and ML applications and repository of algorithms.
● Develop responsible AI accountability policies and incentive structures.
● Understand AI regulatory requirements.
● Set common AI terms and taxonomy for the organization.
● Provide knowledge resources and training to the enterprise to foster a culture that continuously
promotes ethical behavior.
● Determine AI maturity levels of business functions and address insufficiencies.
● Use and adapt existing privacy and data governance practices for AI management.
● Create policies to manage third party risk, to ensure end-to-end accountability.
● Understand differences in norms/expectations across countries.
● Evaluate the trustworthiness, validity, safety, security, privacy and fairness of the AI system using
the following methods:
● Use edge cases, unseen data, or potential malicious input to test the AI models.
● Conduct repeatability assessments.
● Complete model cards/fact sheets.
○ Validation tool that lays out conditions, edge cases and unseen data to serve as
documentation of the AI tool's credibility
● Create counterfactual explanations (CFEs); see the sketch at the end of this list.
● Conduct adversarial testing and threat modeling to identify security threats.
● Refer to OECD catalog of tools and metrics for trustworthy AI.
● Establish multiple layers of mitigation to stop system errors or failures at different levels or
modules of the AI system.
● Understand trade-offs among mitigation strategies.
● Apply key concepts of privacy-preserving machine learning and use privacy-enhancing
technologies and privacy-preserving machine learning techniques to help with privacy protection
in AI/ML systems.
● Understand why AI systems fail. Examples include: brittleness; hallucinations; embedded bias;
catastrophic forgetting; uncertainty; false positives.
● Determine degree of remediability of adverse impacts.
● Conduct risk tracking to document how risks may change over time.
● Consider, and select among different deployment strategies.
● Use the same features for training and testing data.
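A toy sketch of the counterfactual-explanation idea referenced above: greedily perturb one feature at a time until the model's prediction flips, then report the changed input. The classifier, step sizes and feature meanings are hypothetical, and real CFE methods add distance and plausibility constraints:

    def counterfactual(predict, x, step=0.1, max_rounds=50):
        # Try single-feature nudges of growing size until the label flips.
        original = predict(x)
        for _ in range(max_rounds):
            for i in range(len(x)):
                for direction in (step, -step):
                    trial = list(x)
                    trial[i] += direction
                    if predict(trial) != original:
                        return trial  # a nearby input with a different outcome
            step *= 2  # widen the search
        return None

    # Hypothetical loan model: denies applicants with income below 50.
    def loan_model(x):
        return "deny" if x[0] < 50 else "approve"

    print(counterfactual(loan_model, [48.0, 3.0]))  # e.g., [51.2, 3.0]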
Other Notes:
One of the findings from the IAPP Privacy and Consumer Trust Report concerned a set of behaviors
referred to as privacy self-defense. These include deciding against an online purchase, deleting a
smartphone app or avoiding a particular website due to privacy concerns. When consumers lose trust in
how their data is being collected and used, they are more likely to engage in these self-defensive
behaviors to protect their privacy.
The European Commission signed off on its Ethics Guidelines for Trustworthy AI, which outline four
principles or “ethical imperatives”: AI systems should respect human autonomy, prevent harm,
incorporate fairness and enable explicability. Another layer of guidance advises that AI respect human
dignity, individual freedom, democracy, justice, the rule of law, equality, non-discrimination, solidarity and
citizens’ rights. (source)
SB 1047
Applies to frontier models (exceeding certain computational thresholds, capable of generating text, code,
audio or visual content) that are accessible to CA users.
Frontier models
● Thresholds still being discussed
● GPT-4, PaLM 2, DALL-E 2, Stable Diffusion, Codex
Key obligations:
● Registration - register frontier AI models with CA Dept of Technology
● Risk Assessment - comprehensive risk assessment must be conducted to identify harms/biases
● Mitigation - developers must implement measures to mitigate identified risks and biases such as
○ Data quality and diversity checks
○ Bias detection and correction
○ Transparency about model limitations
● User notification - users must be informed about potential risks/limitations
● Compliance monitoring - systems to monitor and ensure compliance
Open Source
● Challenges
○ Data provenance, transparency, legality etc
○ Evaluation (testing/monitoring)
○ Domain expertise