AI Risk Assessment Template

Contents

Generative AI Tool and proposed use-case
Details of request and assessment
Functions performed by AI tool/use-case
  Operational AI
  Automated Decision Making
General assessment
  General benefits assessment
  General risk factor assessment
Community benefit
Alignment with legal frameworks
Risk factors for individuals or communities
  Possible harms – Significant and irreversible
  Possible harms – Reversible
  Possible harms – Secondary or cumulative
Fairness: Risk factors
  Fairness – Data selection
  Fairness – Data availability and quality
  Fairness – Data representative of population
  Fairness – Diversity and inclusion
  Fairness – Performance indicators
  Fairness – Monitor performance
Sensitive data considerations
  Privacy and security
  Privacy and security – Impact assessment
  Privacy and security – Consent
  Privacy and security – NSW Cyber Security Policy
  Privacy and security – Sensitive data subjects
Transparency: Risk factors
  Transparency – Consultation
  Transparency – Publicise use of tool
  Transparency – Appeal an AI-informed decision
  Transparency – Explainability of decisions
Accountability: Risk factors
  Accountability – Responsibilities
  Accountability – Rollback processes
Procurement
Overall Assessment



Generative AI Tool and proposed use-case

Name of Generative AI tool:
Name of the tool's developer/owner/vendor:
Description of proposed use-case:

Details of request and assessment

Requested by: Name:    Branch/Unit:
Consultation: Legal / ICT / Governance and Risk / Other
Assessed by: Name:    Date:    Branch/Unit:
Approved? Yes / No
Reasons:
Name:    Date:



Functions performed by AI tool/use-case

Operational AI
1. Would the Generative AI tool/use-case constitute 'operational AI'?
Operational AI systems are those that have a real-world effect. Their purpose is to generate an action, either prompting a human to act or acting by themselves. Operational AI systems often work in real time (or near real time) using a live environment for their source data.
Non-operational AI systems do not use a live environment for their source data. Most frequently, they produce analysis and insight from historical data.
Comments:

Yes: Document your reasons.

No: Document your reasons.

Automated Decision Making
2. Will the use-case use real-time or near real-time data to:
– make recommendations for staff to act on in real-time or near real-time, or
– take actions itself in real-time or near real-time?
Comments:

Yes, the decisions include high or very high-risk factors (e.g. AI makes and implements operational decisions that can negatively affect human wellbeing, autonomously of human input): Do not proceed without advice from Legal, Governance and Risk Branch. If the use-case proceeds, pilot first with ongoing controls and monitoring. A formal review should be conducted after the pilot phase, with oversight from Legal, Governance and Risk Branch.

Yes, the decisions include medium risk factors (e.g. AI generates operational insights, decisions or recommendations for a human to action, with some potential for harm): Do not proceed without advice from Legal, Governance and Risk Branch. Pilot first; ongoing controls and monitoring are required once the pilot commences.

Yes, the decisions include low risk factors (e.g. AI generates insights or alerts for operational human use, with minimal potential for harm): The use-case can proceed with appropriate ongoing controls and monitoring. Pilot the use-case first.

No, the use-case relies on historical data, though outputs may generate insights for non-operational human use from non-sensitive data: The use-case can proceed, but you need to review your risk treatments and make sure there are sufficient controls in place.

No, the use-case relies on historical data for reporting or informing purposes only: The use-case can proceed with appropriate ongoing controls and monitoring.



General assessment

General benefits assessment
Consider the benefits associated with the use-case. Rate each benefit as Insignificant, Minor, Moderate, Major or Extensive:
– Deliver an existing service or outcome to a higher standard/quality (e.g. accuracy or client satisfaction).
– Reduce processing or delivery times.
– Generate financial efficiencies or savings.
– Deliver a new service or outcome (particularly if it cannot be done without using AI).
– Enable future innovations to existing services, or new services or outcomes.

Comments
Please include your overall assessment of the general benefits and the rationale for your assessment.

General risk factor assessment
Consider the inherent risks1 associated with the following. Rate each as Insignificant, Minor, Moderate, Major or Severe:
– The use-case delivering a new or existing service.
– The potential to cause discrimination from unintended bias.
– The use-case being a single point of failure for your service or policy.
– Insufficient experienced human oversight of the use-case.
– Over-reliance on the use-case, or ignoring the system due to high rates of false alerts.
– An unclear linkage between operating the use-case and the strategic plan outcomes.

Comments
Please include your overall assessment of the general risk and the rationale for your assessment.

1 Refer to risk ratings in Appendix 3 of the Office's Risk Management Framework.



Community benefit
3. Will the use-case improve on existing approaches to deliver the outcomes described in the Office's:
– Enabling legislation
– Strategic plan
– Transformation program?
Comments:

Yes: Document your reasons. Go to the next question.

Partially: Conduct a formal benefits review before scaling the use-case. Document your reasons and go to the next question.

No: Do not proceed any further. Discuss the use-case with Legal, Governance and Risk Branch.

4. Were other non-AI systems considered?
Comments:

Yes: Document your reasons, then go to the next question.

Informally: Conduct a formal benefits review before scaling the use-case. Document your reasons and go to the next question.

No: Do not proceed any further. Discuss the use-case with Legal, Governance and Risk Branch.



Alignment with legal frameworks
5. Does the use-case and the use of data align with relevant legislation?
You must make sure your data use aligns with:
– Privacy and Personal Information Protection Act 1998 (NSW) (PPIPA)
– Anti-Discrimination Act 1977 (NSW)
– Government Information (Public Access) Act 2009 (GIPA)
– State Records Act 1998
and other relevant NSW or Commonwealth Acts, including:
– Public Interest Directions made under PPIPA (exemptions)
– Health Records and Information Privacy Act 2002 (NSW) (HRIPA)
– Health Public Interest Directions made under HRIPA (exemptions)
– Public Health Act 2010
– Ombudsman Act 1974
Comments:

Yes: Document your reasons. Go to the next question.

Unclear: Pause the use-case. Seek advice from Legal, Governance and Risk Branch.

No: Do not proceed any further unless you receive advice from Legal, Governance and Risk Branch that allows the use-case to proceed. Consider redesigning the use-case.



Risk factors for individuals or communities
Consider the risks of the AI tool resulting in the following. Rate each as Insignificant, Minor, Moderate, Major or Severe:
– Physical harms
– Unfair treatment
– Providing poor or the wrong services
– Increased processing or delivery times
– Environmental harms or harms to the broader community
– Unauthorised use of health or sensitive personal information (SPI)
– Impact on a right, privilege or entitlement
– Unintended identification or misidentification of an individual
– Misapplication of a fine or penalty
– Other financial or commercial impact
– Incorrect advice or guidance
– Inconvenience or delay
– Other harms

Comments
Please include your overall assessment of the risks and the rationale for your assessment.



Possible harms – Significant and irreversible
6. Considering planned mitigations, could the use-case cause significant or irreversible harms?
If there is a residual risk of significant or irreversible harms and the use-case proceeds, you must pilot the use-case first, then conduct a formal benefits review before scaling the use-case.
For more information on when a Human Rights Impact Assessment is required, see https://humanrights.gov.au/
Comments:

No: Document your reasons, then go to the next question.

Yes, but it's better than existing systems: You must have Legal, Governance and Risk Branch advice that allows this use-case to proceed. Consult with the Chief Executive Board. Consider a Human Rights Impact Assessment.

Yes: Do not proceed any further unless you receive Legal, Governance and Risk Branch advice that allows the use-case to proceed. Consult with the Chief Executive Board. Consider a Human Rights Impact Assessment.

Possible harms – Reversible
7. Considering planned mitigations, could the use-case cause reversible harms?
If there is a residual risk of mid-range (or higher) harms and the use-case proceeds, you must pilot the use-case first before scaling the use-case.
Comments:

No: Document your reasons, then go to the next question.

Yes, but it's better than existing systems: You must have Legal, Governance and Risk Branch advice that allows this use-case to proceed. Consult with the Chief Executive Board. Consider a Human Rights Impact Assessment.

Yes: Do not proceed any further unless you receive Legal, Governance and Risk Branch advice that allows the use-case to proceed. Consult with the Chief Executive Board. Consider a Human Rights Impact Assessment.



Possible harms – Secondary or cumulative
8. Considering planned mitigations, could the use-case result in secondary (or follow-on) harms, or result in a cumulative harm from repeated application of the use-case?
If there is a residual risk of mid-range (or higher) harms and the use-case proceeds, you must pilot the use-case first before scaling the use-case.
Comments:

No: Document your reasons. Go to the next question.

Yes, but it's better than existing systems: You must have Legal, Governance and Risk Branch advice that allows this use-case to proceed. Consult with the Chief Executive Board. Consider a Human Rights Impact Assessment.

Yes: Do not proceed any further unless you receive Legal, Governance and Risk Branch advice that allows the use-case to proceed. Consult with the Chief Executive Board. Consider a Human Rights Impact Assessment.



Fairness: Risk factors
Note: When using this matrix to assess open-source generative AI tools, 'data' may more appropriately refer to the data the tool was trained on. Answer according to how much you know about the training data. Minority populations and vulnerable groups may be underrepresented in training data sets, so risks may increase if little is known about the tool's training data.

Consider the risks associated with the following. Rate each as Insignificant, Minor, Moderate, Major or Severe:
– Using incomplete or inaccurate data.
– Having poorly defined descriptions and indicators of "Fairness".
– Not ensuring ongoing monitoring of "Fairness indicators".
– Decisions to exclude outlier data.
– Informal or inconsistent data cleansing and repair protocols and processes.
– Using informal bias detection methods (best practice includes automated testing; a minimal sketch follows this table).
– The likelihood that re-running scenarios could produce different results (lack of reproducibility).
– Inadvertently creating new associations when linking data and/or metadata.
– Differences between the data used for training and the data for intended use.

Comments
Please include your overall assessment of the risks and the rationale for your assessment.
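
The "automated testing" referred to in the bias-detection risk factor above can be as simple as a scripted check of outcome rates across demographic groups. The following Python sketch is illustrative only; the group labels, sample data and the 0.1 tolerance are assumptions, not values taken from this template or the NSW AI Assurance Framework.

# Minimal automated bias test: compare positive-outcome (selection) rates
# across demographic groups and flag large gaps (demographic parity).
from collections import defaultdict

def selection_rates(records):
    """Per-group rate of positive outcomes from (group, outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(records):
    """Largest difference in selection rate between any two groups."""
    rates = selection_rates(records)
    return max(rates.values()) - min(rates.values())

# Hypothetical (demographic group, model decision) pairs from a test set.
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
if parity_gap(sample) > 0.1:  # illustrative tolerance, not a mandated figure
    print("Warning: selection rates differ across groups; review for bias.")

A check like this can be run automatically whenever the model or its data changes, which is what moves bias detection from "informal" toward the best practice the matrix describes.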

Fairness – Data selection
9. Can you explain why you / the vendor selected this data for this use-case and not others?
Comments:

Yes: Document your reasons. Go to the next question.

Unclear: Consult with Legal, Governance and Risk Branch and the Chief Executive Board to identify alternative data sources, implement a data improvement strategy, or redesign the use-case.

It's better than existing systems: Document your reasons. You should clearly demonstrate that you have consulted with Legal, Governance and Risk Branch and the Chief Executive Board before proceeding.

No: Pause the use-case and consider how absent or poor-quality data will impact your system.



Fairness – Data availability and quality
10. Is the data that you need for this use-case available and of appropriate quality, given the potential harms identified?
If the use-case is a data creation or data cleansing application, answer according to the availability of any existing data that is needed for the use-case to succeed, for example, training datasets.
Comments:

Yes: Document your reasons, then go to the next question.

Unclear: Consult with Legal, Governance and Risk Branch and the Chief Executive Board to identify alternative data sources, implement a data improvement strategy, or redesign the use-case.

It's better than existing systems: Document your reasons. You should clearly demonstrate that you have consulted with Legal, Governance and Risk Branch and the Chief Executive Board before proceeding to the pilot phase.

No: Pause the use-case and consider how absent or poor-quality data will impact your system.

Fairness – Data representative of population
11. Does your data reflect the population that will be impacted by your use-case?
Comments:

Yes: Document your reasons, then go to the next question.

Not entirely, but it's better than existing systems: You should clearly demonstrate that you have consulted with Legal, Governance and Risk Branch and the Chief Executive Board before proceeding to the pilot phase. Consider a Human Rights Impact Assessment.

No or unclear: Pause the use-case and address the gaps in your solution design.

N/A: Document your reasons as to why this does not apply, then go to the next question.

Fairness – Diversity and inclusion
12. Have you considered how your use-case will address issues of diversity and inclusion (including geographic diversity)?
13. Have you considered the impact with regard to gender and on minority groups, including how the solution might impact different individuals in minority groups, when developing this use-case?
Minority groups may include:
– those with disability
– LGBTQIA+ and gender-fluid communities
– people from CALD backgrounds
– Aboriginal and Torres Strait Islander peoples
– children and young people
Comments:

Yes: Document your reasons, then go to the next question.

Not entirely, but it's better than existing systems: You should clearly demonstrate that you have consulted with Legal, Governance and Risk Branch and the Chief Executive Board before proceeding to the pilot phase. Consider a Human Rights Impact Assessment.

No or unclear: Pause the use-case and address the gaps in your solution design.

N/A: Document your reasons as to why this does not apply, then go to the next question.

Fairness – Performance indicators
14. Do you have appropriate performance measures and targets (including fairness ones) for your use-case, given the potential harms?
Aspects of accuracy and precision are readily quantifiable for most systems that predict or classify outcomes. This performance can be absolute, or relative to existing systems (a sketch of subgroup performance metrics follows this question).
How would you characterise "Fairness" (such as equity, respect and justice) in outcomes from a use-case? Which of these relate to, or are impacted by, the use of AI?
Comments:

Yes: Document your reasons, then go to the next question.

No or unclear: For operational AI systems, pause the use-case until you have established performance measures and targets. For non-operational systems, results should be treated as indicative and not relied on.

N/A: Document your reasons as to why this does not apply, then go to the next question.
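
As the guidance above notes, accuracy and precision are readily quantifiable for systems that predict or classify outcomes, and fairness targets can sit alongside them by computing the same measures per subgroup. A minimal Python sketch, with illustrative data and group labels (all assumed for the example):

# Compute accuracy and precision per subgroup from (group, predicted, actual)
# triples, so fairness targets can be compared against overall targets.
def accuracy(pairs):
    """Fraction of (predicted, actual) pairs that agree."""
    return sum(p == a for p, a in pairs) / len(pairs)

def precision(pairs):
    """Of the positive predictions, the fraction that were correct."""
    positives = [(p, a) for p, a in pairs if p == 1]
    return sum(a == 1 for _, a in positives) / len(positives) if positives else float("nan")

results = [("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
           ("B", 1, 1), ("B", 0, 1), ("B", 1, 1)]  # hypothetical evaluation set
for group in sorted({g for g, _, _ in results}):
    pairs = [(p, a) for g, p, a in results if g == group]
    print(group, "accuracy:", round(accuracy(pairs), 2),
          "precision:", round(precision(pairs), 2))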

Fairness – Monitor performance
15. Do you have a way to monitor and calibrate the performance (including fairness) of your use-case?
Operational AI systems that are continuously updated/trained can quickly move outside of performance thresholds. Supervisory systems can monitor system performance and alert when calibration is needed (a minimal sketch follows this question).
Comments:

Yes: Document your reasons, then go to the next question.

No or unclear: For operational AI systems, pause the use-case until you have established performance measures and targets. For non-operational systems, results should be treated as indicative and not relied on.

N/A: Document your reasons as to why this does not apply, then go to the next question.
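
One way to realise the supervisory system described above is a rolling comparison of live outcomes against a baseline, with an alert when performance drifts outside a tolerance. This sketch is illustrative; the baseline, tolerance and window size are assumptions, not values from the template.

# Supervisory monitor: track a rolling window of outcomes and signal when
# the observed rate drifts more than `tolerance` from the expected baseline.
from collections import deque

class PerformanceMonitor:
    def __init__(self, baseline, tolerance=0.05, window=100):
        self.baseline = baseline              # expected metric (e.g. accuracy)
        self.tolerance = tolerance            # allowed drift before alerting
        self.recent = deque(maxlen=window)    # rolling window of outcomes

    def record(self, correct: bool) -> bool:
        """Record one outcome; return True when recalibration is needed."""
        self.recent.append(correct)
        if len(self.recent) < self.recent.maxlen:
            return False                      # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) > self.tolerance

monitor = PerformanceMonitor(baseline=0.90)
# In operation: if monitor.record(prediction == actual) returns True,
# alert the responsible officer and pause or recalibrate the use-case.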



Sensitive data considerations
The Office handles sensitive data of the types outlined below. In accordance with the Generative AI policy, officers must not enter any sensitive data into any open-source generative AI tools. When assessing this question for open-source generative AI tools, consider the consequences of officers entering sensitive data into such a tool.

Do you use sensitive data, including information on the following? Rate each by the size of the identifiable cohort, from lowest to highest risk: >50 or N/A; >20 and <50; >10 and <20; >5 and <10; <5. (A minimal cohort-size check is sketched after this table.)
– Children
– Religious individuals
– Racially or ethnically diverse individuals
– Individuals with political opinions or associations
– Individuals with trade union memberships or associations
– Gender and/or sexually diverse individuals
– Individuals with a criminal record
– Specific health or genetic information
– Personal biometric information
– Other sensitive person-centred data

Comments
Please include your overall assessment of the risks and the rationale for your assessment.
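
The cohort-size bands in the table above can be checked mechanically with a k-anonymity-style query: group records by potentially identifying fields and find the smallest group. The field names and data below are illustrative assumptions; only the <5 trigger mirrors the table's highest-risk band.

# Cohort-size check: size of the smallest group of records that share the
# same values for the chosen (potentially identifying) fields.
from collections import Counter

def smallest_cohort(records, fields):
    counts = Counter(tuple(r[f] for f in fields) for r in records)
    return min(counts.values())

dataset = [  # hypothetical records
    {"postcode": "2000", "age_band": "30-39", "religion": "X"},
    {"postcode": "2000", "age_band": "30-39", "religion": "X"},
    {"postcode": "2010", "age_band": "40-49", "religion": "Y"},
]
k = smallest_cohort(dataset, ["postcode", "age_band"])
if k < 5:  # the table's highest-risk band: identifiable cohort < 5
    print(f"Smallest identifiable cohort is {k}: treat as highest risk.")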



Privacy and security
Note: This question may not apply for generative AI tools that can be adopted 'off the shelf'. If this is the case, it may be more appropriate to check with the vendor how the product observes privacy and security principles, and answer accordingly.

16. Have you applied the "Privacy by Design" and "Security by Design" principles in your use-case?
Comments:

Yes: Document your reasons, then go to the next question.

Partially: Pause the use-case and determine how you will improve your data or practices.

No or unclear: Pause the use-case until you have received advice from Legal, Governance and Risk Branch. You may need to adjust the proposed tool/use-case.

Privacy and security – Impact assessment
17. Have you completed a privacy impact assessment (either third party or self-assessed)?
Comments:

Yes: Document your reasons, then go to the next question.

No: Pause the use-case until you have completed a privacy impact assessment.

Privacy and security – Consent
18. If you are using information about individuals who are reasonably identifiable, have you sought consent from citizens about using their data for this particular purpose?
See the Privacy and Personal Information Protection Act 1998 (NSW) for a definition of personal information.
See also the NSW Privacy Commissioner's fact sheet on Reasonably Ascertainable Identity.
Comments:

Yes: Document your reasons, then go to the next question.

Authorised use: For AI systems intended to operate under legislation which allows use of identifiable information, do not proceed unless you receive Legal, Governance and Risk Branch advice that allows this use-case to proceed. The use-case should be carefully monitored for harms during the pilot phase.

Partially: Pause the use-case until you have consent, or redesign your use-case.

No: Pause the use-case until you have either consent or Legal, Governance and Risk Branch advice authorising use of this information.

N/A: Document your reasons as to why this does not apply, then go to the next question.



Privacy and security – NSW Cyber Security Policy
19. Does the use-case adhere to the requirements in the NSW Cyber Security Policy? Have you considered end-to-end security principles for your use-case?
Comments:

Yes: Document your reasons, then go to the next question.

No or partially: Pause the use-case until these requirements can be met.

N/A: Document your reasons as to why this does not apply, then go to the next question.

Privacy and security – Sensitive data subjects
The Office handles sensitive data of the types outlined above. In accordance with the Generative AI policy, officers must not enter any sensitive data into any open-source generative AI tools. When assessing this question, consider the consequences of staff entering sensitive data into such a tool.

20. Does your dataset include sensitive data about data subjects, as described by section 19 of the Privacy and Personal Information Protection Act 1998 (NSW)?
Comments:

No: Document your reasons, then go to the next question.

Yes: Seek explicit approval from Legal, Governance and Risk Branch to proceed with this risk.

Unclear: Pause the use-case, clarify the nature of the data, and address any inadvertent use of sensitive data in the use-case.


Transparency: Risk factors
Consider the inherent risks associated with the following. Rate each as Insignificant, Minor, Moderate, Major or Severe:
– Incomplete documentation of use-case design, implementation, or operation.
– No or limited access to the model's internal workings or source code ("black box").
– Being unable to explain the output of a complex model.
– A member of the public being unaware that they are interacting with the use-case.
– No or low ability to incorporate user feedback into the use-case.

Comments
Please include your overall assessment of the risks and the rationale for your assessment.

Transparency – Consultation
You must consult with the relevant community when designing an AI system. This is particularly important for operational AI systems.

Communities have the right to influence government decision-making where those decisions, and the data on which they are based, will have an impact on them.

For use-cases intended to operate under legislation which allows use without community consultation, the public benefits must be clear before proceeding to the pilot phase.

21. Have you consulted with the relevant community that will benefit from (or be impacted by) the use-case?
Comments:

Yes: Document your reasons, then go to the next question.

Authorised use: For use-cases intended to operate under legislation which allows use without community consultation, do not proceed unless you receive Legal, Governance and Risk Branch advice that allows this use-case to proceed. The use-case should be carefully monitored for harms during the pilot phase.

No, but it's better than existing systems: Document your reasons. You should clearly demonstrate that you have consulted with Legal, Governance and Risk Branch and the Chief Executive Board before proceeding to the pilot phase.

No: Pause the use-case, develop a Community Engagement Plan2 and consult with the relevant community.

N/A: Document your reasons as to why this does not apply, then go to the next question.

2 A Community Engagement Plan should demonstrate: objectives and planned outcomes; how the public can question and seek reviews of AI-based decisions; how the community can get insights into data use and methodology; and how the community will be informed of changes to an AI solution, including where existing technology is adapted for another purpose. Source: https://www.digital.nsw.gov.au/policy/artificial-intelligence/artificial-intelligence-ethics-policy/mandatory-ethical-principles



Transparency – Publicise use of tool
22. Is the scope of the Office's use of the use-case publicly available?
Comments:

Yes: Document your reasons, then go to the next question.

No: Make sure you communicate the scope and goals of the use-case to Legal, Governance and Risk Branch, the Chief Executive Board and the relevant community who are impacted, before proceeding beyond pilot.

N/A: Document your reasons as to why this does not apply, then go to the next question.

Transparency – Appeal an AI-informed decision
23. Is there an easy and cost-effective way for people to appeal a decision that has been informed by your use-case?
Comments:

Yes: Document your reasons, then go to the next question.

No: Pause your use-case, consult with Legal, Governance and Risk Branch and the Chief Executive Board, and establish an appeals process.

N/A: Document your reasons as to why this does not apply, then go to the next question.

Transparency – Explainability of decisions
24. Does the use-case allow for transparent explanation of the factors leading to the AI decision or insight?
Comments:

Yes: Document your reasons, then go to the next question.

No, but a person makes the final decision: Consult with Legal, Governance and Risk Branch and the Chief Executive Board, and establish a process to readily reverse any decision or action made by the use-case. Actively monitor for potential harms during the pilot phase.

No: Pause your use-case, consult with Legal, Governance and Risk Branch and the Chief Executive Board, and establish a process to readily reverse any decision or action made by the use-case.

N/A: Document your reasons as to why this does not apply, then go to the next question.



Accountability: Risk factors
Consider the inherent risks associated with the following. Rate each as Insignificant, Minor, Moderate, Major or Severe:
– Insufficient training of use-case operators.
– Insufficient awareness of use-case limitations on the part of the Chief Executive Board.
– No or low documentation of performance targets or "Fairness" principles trade-offs.
– No or limited mechanisms to record use-case decision history (a minimal audit-log sketch follows this table).
– The inability of third parties to accurately audit AI system insights / decisions.

Comments
Please include your overall assessment of the risks and the rationale for your assessment.
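
Even a simple append-only audit log goes a long way toward the decision-history and third-party-audit mechanisms the matrix above asks about. A minimal Python sketch; the record fields and file name are assumptions for illustration:

# Append one decision record per line to a JSON-lines audit log so that
# use-case decisions can be reviewed and audited later.
import json
import time

def log_decision(path, use_case, inputs, output, operator):
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "use_case": use_case,
        "inputs": inputs,      # what the tool was given
        "output": output,      # what it recommended or decided
        "operator": operator,  # who actioned or reviewed it
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

log_decision("decision_log.jsonl", "example-use-case",
             {"request_id": "12345"}, "refer to officer", "j.citizen")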

Accountability – Responsibilities
25. Have you established who is responsible for:
– use of the AI insights and decisions
– policy/outcomes associated with the use-case
– monitoring the performance of the use-case
– data governance?
Comments:

Yes: Document your reasons, then go to the next question.

No or unclear: Pause the use-case while you identify who is responsible, and make sure they are aware of and capable of undertaking their responsibilities.

N/A: Document your reasons as to why this does not apply, then go to the next question.



Accountability – Rollback processes
26. Have you established clear processes to:
– intervene if a relevant stakeholder finds concerns with insights or decisions, and
– ensure you do not become overconfident in, or over-reliant on, the use-case?
Comments:

Yes: Document your reasons, then go to the next question.

No: Pause your use-case, consult with Legal, Governance and Risk Branch and the Chief Executive Board, and establish appropriate processes.

N/A: Document your reasons as to why this does not apply, then go to the next question.

Procurement
27. If you are procuring all or part of a use-case, have you satisfied the above requirements for:
– transparency
– privacy and security
– fairness
– accountability
as defined in the NSW AI Assurance Framework?
Comments:

Yes: Document your reasons, then go to the next question.

No: Pause your use-case. Make sure you can meet the requirements before you continue.



Overall Assessment

Community Benefit: AI should deliver the best outcome for the citizen, and key insights into decision-making.
Highest risk:
No. of risks:

Fairness: Use of AI will include safeguards to manage data bias or data quality risks, following best practice and Australian Standards.
Highest risk:
No. of risks:

Privacy and Security: AI will include the highest levels of assurance. Ensure use-cases adhere to PPIPA.
Highest risk:
No. of risks:

Transparency: Review mechanisms will ensure citizens can question and challenge AI-based outcomes. Ensure use-cases adhere to the GIPA Act.
Highest risk:
No. of risks:

Accountability: Decision-making remains the responsibility of organisations and the Chief Executive Board.
Highest risk:
No. of risks:

Does the overall risk assessment indicate the use-case involving AI (or other form of automated decision-making technology) can be implemented?
Comments:

Yes: Document your reasons.

Yes, but only with further safeguards and controls: Document your reasons, including the further safeguards and controls (and who is responsible for overseeing their implementation).

No, not without further investigation of safeguards and controls: Document your reasons, including whether someone will be assigned to conduct further investigation.