The Artificial Intelligence Act (AI Act)
Overview
• Background: Historic Timeline, OECD AI Principles
• What is the AIA? What does it aim to do?
• Subject Matter of the AIA
• What is an ‘AI system’ under the AIA?
• Actors under the AI Act: Providers, Deployers, Importers, Operators
• Scope of the AIA: Extraterritorial effect
• GPAI models and obligations of GPAI model providers
Introduction to the AI Act
• The European Union’s AI Act is an EU regulation concerning AI that establishes a common legal framework for AI within the European Union.
• The AI Act entered into force on 1 August 2024.
• The AI Act places risk- and technology-based obligations on organisations that develop, use, distribute or import AI systems in the EU, coupled with high fines for non-compliance.
• The AI Act generally adopts a risk-based approach to the deployment and use of AI systems. AI systems deemed to pose an “unacceptable” level of risk are banned outright, while other AI systems are placed within a “risk tier” with corresponding levels of compliance obligations.
EU’s path towards effective AI
regulation
• In April 2018, the European Commission outlined the European
approach to boost investment and set ethical guidelines for the
regulation of AI.
• The European Commission proposed a three-pronged approach: increase public and private investment in AI, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework.
• In June 2018, the Commission appointed the High-Level Expert Group on AI (AI HLEG) and launched the European AI Alliance.
• In 2019, these two bodies collaborated to publish the Ethics Guidelines for Trustworthy AI.
• The AI HLEG wrote the guidelines after consulting the members of the European AI Alliance, a multi-stakeholder forum created to provide feedback on regulatory initiatives regarding AI.
European Commission's Guidelines
for Trustworthy AI
• According to the guidelines, trustworthy AI should be lawful, ethical and robust.
• These Guidelines set out a framework for achieving Trustworthy AI.
• The framework does not explicitly deal with Trustworthy AI’s first component (lawful AI).
• Instead, the guidelines aimed to offer guidance on the second and third components: fostering
and securing ethical and robust AI.
• These Guidelines sought to go beyond a list of ethical principles, by providing guidance on how
such principles can be operationalised.
• These guidelines laid down seven key ethical requirements that AI systems should meet to be
trustworthy:
Human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
• In February 2020, the European Commission built on these guidelines through its
White Paper titled “On Artificial Intelligence: A European Approach to Excellence
and Trust”.
• The White Paper announced upcoming regulatory action and presented certain key elements of the future legal framework. Among these key elements was the risk-based approach, under which mandatory legal requirements derived from the ethical principles should be imposed on certain AI systems.
• The White Paper was followed by a public consultation process involving many stakeholders from various backgrounds, which influenced the drafting of the AI Act.
In April 2021, the European Commission published its proposal for the AI Act: a regulation aiming to harmonise rules on artificial intelligence, accompanied by a coordinated plan setting out joint actions for the Commission and the Member States.
Implementation timeline:
• Twenty-four months after entry into force: obligations take effect for the high-risk AI systems specifically listed in Annex III.
• Thirty-six months after entry into force: obligations take effect for high-risk AI systems not listed in Annex III.
• By 2031: the Commission will carry out an assessment of the enforcement of this Regulation and report on it to the European Parliament, the Council and the European Economic and Social Committee.
What is an ‘AI system’ under the AI Act?
An AI system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives,
how to generate outputs such as predictions, content, recommendations, or decisions that can influence
physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after
deployment.
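To make the definition concrete, here is a minimal, hypothetical Python sketch of a machine-based system that infers, from the input it receives, how to generate an output (a prediction); the data and model choice are invented for illustration and are not drawn from the Act.

# Minimal, hypothetical sketch of the definitional elements of an 'AI system':
# a machine-based system that infers from inputs how to generate outputs.
from sklearn.linear_model import LogisticRegression

# The system learns the input-to-output mapping from data rather than
# being explicitly programmed with fixed rules.
X_train = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9]]
y_train = [0, 1, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# After deployment, the system infers an output (here, a prediction) from
# new input; acting on that output is what can influence a physical or
# virtual environment.
print("Inferred output:", model.predict([[0.85, 0.7]]))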
Differences between an AI model and an AI
system
• In the machine-learning context, inference performed during the build phase is often used to evaluate a version of a model; after deployment, inference is how the system derives outputs from new inputs.
- Putting into service: refers to the supply of an AI System for first use directly to the Deployer, or for own use, in the EU market.
• (7) Distributor: ‘distributor’ means a natural or legal person in the supply chain,
other than the provider or the importer, that makes an AI system available on the
Union market;
• GPAI models are foundation models that are trained on large datasets and can be used for many different tasks without much fine-tuning.
General Purpose AI models
• Article 3(63)
“AI model, including when trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”
Key Characteristics of GPAI models
Generality - GPAI models can perform a wide range of distinct tasks. GPAI models are also
often referred to as Foundation Models. They have purposefully been designed to perform a
wide range of tasks and to easily adapt to new situations.
To this end, they are trained on very broad sets of unlabelled data and can be used for many
different tasks without much fine-tuning. This also means that they can be used in and
adapted to a wide range of applications for which they were not originally, intentionally or specifically designed.
Training data - The models are typically trained on large datasets through various methods,
such as self-supervised, unsupervised, or reinforcement learning.
Integration - While essential, these models alone do not constitute AI systems. They require
additional components, such as user interfaces, to be integrated into various downstream
systems or applications.
GPAI Systems
• GPAI models require the addition of further components, such as a user interface,
to become AI systems. When fitted with a user interface and placed on the
market for non-high-risk purposes, they usually become a ‘GPAI system’ meaning
an AI system which is based on a GPAI model and which has the capability to
serve a variety of purposes, both for direct use as well as for integration in other
AI systems (Article 3(66) AI Act).
• An AI model, including a GPAI model, can be an essential part of an AI system (such systems are referred to in the definition as “downstream systems”), but does not constitute such a system on its own (for instance, large language models), as the sketch below illustrates.
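As a rough illustration of the model/system distinction, the hypothetical sketch below wraps a bare prompt-to-text model with a user interface to form a downstream system; all class and function names are invented for illustration.

# Hypothetical sketch of the GPAI model vs. GPAI system distinction
# (Article 3(66)): a bare model only maps prompts to text and is not an
# AI system; adding further components such as a user interface turns it
# into a downstream system. All names below are invented for illustration.
from typing import Callable

GPAIModel = Callable[[str], str]  # stand-in for a general-purpose AI model

def toy_gpai_model(prompt: str) -> str:
    # Placeholder for a real foundation model's inference call.
    return f"[model completion for: {prompt}]"

class ChatAssistantSystem:
    """A downstream 'GPAI system': the model plus a user interface."""

    def __init__(self, model: GPAIModel) -> None:
        self.model = model

    def run(self) -> None:
        # The user-interface component that the bare model lacks.
        print(self.model(input("Ask the assistant: ")))

# The same underlying model could equally be integrated into translation,
# summarisation or coding systems: the 'variety of purposes' in Article 3(66).
ChatAssistantSystem(toy_gpai_model).run()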
Risk Classification for GPAI under AI
Act
• The AI Act divides GPAI models into two categories. The first category is simply
those models that count as GPAI; the second category applies to a subset of these
models that are identified as posing a systemic risk.
• GPAI models that entail systemic risk are deemed to have high-impact capabilities, meaning that negative incidents involving them could have a disproportionate impact on the organisations and end users that rely on them.
Article 51: Classification of GPAI
models as GPAI models with
systemic risk
1. A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets
any of the following conditions:
(a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies,
including indicators and benchmarks;
(b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it
has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in
Annex XIII.
2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25.
• How is ‘high impact capability’ assessed?
High impact capabilities are presumed for any model whose cumulative training compute exceeds 10^25 floating point operations (FLOPs); a worked example follows below.
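As a worked example of the Article 51(2) threshold, the sketch below estimates training compute with the widely used "6 × parameters × training tokens" rule of thumb for dense transformer models; this heuristic is a community convention, not part of the AI Act, and the model size and token count are hypothetical.

# Worked example of the Article 51(2) presumption threshold (10^25 FLOPs).
# ASSUMPTION: training compute is estimated with the common community
# heuristic of ~6 FLOPs per parameter per training token for dense
# transformer models; this rule of thumb is NOT part of the AI Act.

THRESHOLD_FLOPS = 1e25  # Article 51(2): presumption of high impact capabilities

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def presumed_systemic_risk(training_flops: float) -> bool:
    """True if the model is presumed to have high impact capabilities."""
    return training_flops > THRESHOLD_FLOPS

# Hypothetical model: 1e12 parameters trained on 15e12 tokens.
flops = estimated_training_flops(n_params=1e12, n_tokens=15e12)
print(f"Estimated training compute: {flops:.2e} FLOPs")  # ~9.00e+25
print("Presumed GPAI model with systemic risk:", presumed_systemic_risk(flops))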