
European Union’s Artificial Intelligence Act
Overview
• Background: historic timeline, OECD AI principles
• What is the AIA? What does it aim to do?
• Subject Matter of the AIA
• What is an ‘AI system’ under the AIA?
• Actors under the AI Act: Providers, Deployers, Importers, Operators
• Scope of the AIA: extraterritorial effect
• GPAI models and obligations of GPAI model providers
Introduction to the AI Act
• The European Union’s AI Act is an EU regulation concerning AI that establishes a common regulatory legal framework for AI within the European Union.
• The AI Act entered into force on 1 August 2024.
• The AI Act places risk- and technology-based obligations on organizations that develop, use, distribute, or import AI systems in the EU, coupled with high fines for non-compliance.
• The AI Act adopts a risk-based approach to the deployment and use of AI systems: AI systems deemed to pose an “unacceptable” level of risk are banned outright, and other AI systems are placed within a “risk tier” with corresponding levels of compliance obligations.
EU’s path towards effective AI regulation
• In April 2018, the European Commission outlined the European approach to boost investment in AI and set ethical guidelines for its regulation.
• The Commission proposed a three-pronged approach: increase public and private investment in AI, prepare for socio-economic changes, and ensure an appropriate ethical and legal framework.
• In June 2018, the Commission appointed the High-Level Expert Group on AI (AI HLEG) and launched the European AI Alliance.
• In 2019, these two bodies collaborated to publish the Ethics Guidelines for Trustworthy AI.
• The AI HLEG wrote the guidelines after consulting the members of the European AI Alliance, a multi-stakeholder forum created to provide feedback on regulatory initiatives regarding AI.
European Commission's Guidelines for Trustworthy AI
• According to the guidelines, trustworthy AI should be lawful, ethical, and robust.
• The guidelines set out a framework for achieving trustworthy AI.
• The framework does not explicitly deal with trustworthy AI’s first component (lawful AI); instead, the guidelines offer guidance on the second and third components: fostering and securing ethical and robust AI.
• The guidelines sought to go beyond a list of ethical principles by providing guidance on how such principles can be operationalised.
• The guidelines laid down seven key ethical requirements that AI systems should meet to be trustworthy: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
• In February 2020, the European Commission built on these guidelines through its White Paper “On Artificial Intelligence: A European Approach to Excellence and Trust”.
• The White Paper announced upcoming regulatory action and presented certain key elements of the future legal framework. Among these was the risk-based approach, suggesting that mandatory legal requirements derived from the ethical principles should be imposed on certain AI systems.
• The White Paper was followed by a public consultation process involving many stakeholders from various backgrounds, which influenced the drafting of the AI Act.
• In April 2021, the European Commission published its proposal for a regulation laying down harmonised rules on artificial intelligence (the AI Act), together with a coordinated plan setting out joint actions for the Commission and member states.
• In December 2022, the Council of the EU adopted its position (‘general approach’) on the Artificial Intelligence Act.
• In June 2023, the European Parliament adopted its negotiating position on the AI Act.
• On 9 December 2023, after months of negotiations, the Council and the European Parliament reached a provisional agreement on the law.
• In May 2024, the Council of the EU formally adopted the AI Act, a law aiming to harmonise rules on artificial intelligence. The legislation follows a ‘risk-based’ approach: the higher the risk of harm to society, the stricter the rules.
• In July 2024, the AI Act was published in the Official Journal of the European Union.
• In August 2024, the Act entered into force.
AI Act’s Implementation Timeline
• Entry into force (1 August 2024): at this point none of the provisions apply; they begin to apply gradually over time.
• Six months after entry into force: prohibitions on certain AI systems (Chapters I and II) apply.
• Twelve months after entry into force: obligations come into effect for GPAI (Chapter V).
• Eighteen months after entry into force: the European Commission is to provide guidelines specifying the practical implementation of the post-market monitoring plan.
• Twenty-four months after entry into force: obligations go into effect for high-risk AI systems specifically listed in Annex III.
• Thirty-six months after entry into force: obligations go into effect for high-risk AI systems not listed in Annex III.
• By 2031: the Commission will carry out an assessment of the enforcement of this Regulation and report to the European Parliament, the Council and the European Economic and Social Committee.
Risk Classification under AI Act
• The first category is deemed to present an “unacceptable” level of risk. AI systems falling within this category are prohibited.
• The second category, high-risk AI systems, can have a significant impact on health, safety, or fundamental rights, and must therefore comply with a large range of obligations to mitigate these risks.
• The third category, “limited risk” AI systems, is subject to disclosure and transparency obligations.
• The fourth category, “minimal risk” AI systems, covers systems that are generally not considered high risk and are not subject to transparency requirements; they are thus not subject to regulation by the AI Act.
Article 1: Subject Matter
Article 1
1. The purpose of this Regulation is to improve the functioning of the internal market and promote the uptake of human-centric and trustworthy artificial intelligence (AI), while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter, including democracy, the rule of law and environmental protection, against the harmful effects of AI systems in the Union and supporting innovation.
2. This Regulation lays down:
(a) harmonised rules for the placing on the market, the putting into
service, and the use of AI systems in the Union;
(b) prohibitions of certain AI practices;
(c) specific requirements for high-risk AI systems and obligations for
operators of such systems;
(d) harmonised transparency rules for certain AI systems;
(e) harmonised rules for the placing on the market of general-purpose
AI models;
(f) rules on market monitoring, market surveillance, governance and
enforcement;
(g) measures to support innovation, with a particular focus on SMEs,
including start-ups.
• The EU has adopted a human-centric approach to the regulation of AI, which draws from the seven principles adopted by the High-Level Expert Group in its Ethics Guidelines for Trustworthy AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability.
• The AI Act aims to improve the European market by promoting the use of AI that is safe, respects human rights, and protects health, safety, and the environment. It sets out rules for how AI can be sold, used, and monitored in the EU, and prohibits certain AI practices.
• The AI Act also addresses how organizations responsible for market monitoring and surveillance mechanisms are expected to operate.
• The Act aims to prescribe harmonized rules and introduce a degree of
standardization for regulating AI systems for use within the EU.
• The AI Act also prohibits and restricts the development of AI systems
that may pose an unacceptable risk to the fundamental rights and
safety of citizens.
• Based on its risk-based approach, the AI Act also categorizes certain
AI systems as high risk, which are subject to more stringent
compliance obligations.
• Additionally, the Act also emphasizes transparency rules for certain AI
systems.
• The Act regulates the use of GPAI models and provides obligations for
providers of such models.
• The Act also sets out measures to support innovation in AI, with a particular focus on SMEs, including start-ups.
Article 3(1): AI System
Article 3(1)
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
Differences between an AI Model and an AI System

AI Model:
• An AI model usually comprises a statistical representation of a specific problem, developed using data.
• AI models are generally used to recognize patterns and make predictions based on data, and are optimized through a training process in which the model learns from the data to improve its accuracy.
• An AI model is a specific component, usually focused on a precise task.

AI Systems:
• AI systems are much broader and more complex, and may integrate one or more AI models to accomplish a task.
• AI systems include not just AI models but also other relevant components, such as components to collect and process data, and user interfaces to interact with users.
• An AI system is a complete framework; an AI model is a component within the operational framework of an AI system. (See the sketch below.)
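To make the model/system distinction concrete, here is a minimal, hypothetical Python sketch. The class names and toy scoring logic are invented for illustration and are not drawn from the AI Act: the ‘model’ is a trained component focused on one precise task, while the ‘system’ wraps it with data processing and a user-facing interface.

```python
# Hypothetical sketch of the model/system distinction; the names and
# logic are invented for illustration, not taken from the AI Act.

class SentimentModel:
    """The 'AI model': a trained statistical component for one precise task."""

    def __init__(self, weights: dict[str, float]):
        self.weights = weights  # parameters produced by a training process

    def predict(self, tokens: list[str]) -> float:
        # Score the input using the learned parameters; > 0 means positive.
        return sum(self.weights.get(token, 0.0) for token in tokens)


class SentimentSystem:
    """The 'AI system': the model plus data handling and a user interface."""

    def __init__(self, model: SentimentModel):
        self.model = model  # the model is one component of the larger system

    def preprocess(self, text: str) -> list[str]:
        # Component that collects and prepares the input data.
        return text.lower().split()

    def respond(self, text: str) -> str:
        # User-facing interface: turns the model's raw score into an output.
        score = self.model.predict(self.preprocess(text))
        return "positive" if score > 0 else "negative"


system = SentimentSystem(SentimentModel({"great": 1.0, "awful": -1.0}))
print(system.respond("A great product"))  # -> positive
```

The same model could be reused inside a different system (say, a content-moderation pipeline), which is one reason the Act treats models and systems as distinct objects of regulation.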
AI System: Input
• Input is used both during development and after deployment. Input can take the form of data, or of knowledge, rules, and code that humans put into the system during development.
• During development, input is leveraged to build AI systems, e.g., with machine learning that produces a model from training data and/or human input.
• Input is also used by a system in operation, for instance, to infer how to generate outputs. Input can include data relevant to the task to be performed, or can take the form of, for example, a user prompt or a search query.
AI System: Input
• Prior to the deployment stage, an AI system is built using one or more ‘models’, developed either manually or automatically (that is, with either reasoning or decision-making algorithms), based on machine/human input or data.
• AI systems may be developed using machine learning, or they may be symbolic/knowledge-based:
- In AI systems built using machine learning, some intelligence arises during training, when the system works out relationships between model parameters without precise instructions. For example, language models are given large amounts of language resources and the objective to “predict the next word”, producing a trained model that appears to respond intelligently to prompts.
- When building a symbolic AI system manually, humans supply the knowledge and the vocabulary in which it is expressed. Here, the knowledge engineer is the source of part of the intelligence; i.e., the AI system does not discover the knowledge it uses from its own experience. (A short sketch contrasting the two approaches follows.)
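The contrast between the two approaches can be sketched in a few lines of Python. This is purely illustrative: the keyword set, training examples, and scoring scheme are invented and assume nothing from the AI Act. In the symbolic version a human supplies the knowledge as explicit rules; in the machine-learning version the relationship between words and labels is derived from example data without precise instructions.

```python
# Illustrative sketch: symbolic vs. machine-learned spam detection.
# All data and rules here are invented for the example.
from collections import Counter

# Symbolic / knowledge-based: a human knowledge engineer supplies the
# knowledge directly as explicit rules.
SPAM_KEYWORDS = {"winner", "free", "prize"}

def symbolic_is_spam(text: str) -> bool:
    return any(word in SPAM_KEYWORDS for word in text.lower().split())

# Machine learning: word weights are derived from labelled examples,
# without a human writing the decision rule itself.
def train_word_weights(examples: list[tuple[str, bool]]) -> Counter:
    weights: Counter = Counter()
    for text, is_spam in examples:
        for word in text.lower().split():
            weights[word] += 1 if is_spam else -1  # learned from data
    return weights

weights = train_word_weights([
    ("free prize inside", True),
    ("team meeting at noon", False),
])

def learned_is_spam(text: str) -> bool:
    # Unseen words score 0, so unfamiliar text defaults to "not spam".
    return sum(weights[word] for word in text.lower().split()) > 0

print(symbolic_is_spam("claim your free prize"))  # True: a hand-written rule fires
print(learned_is_spam("claim your free prize"))   # True: learned weights decide
```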
AI System: Implicit or Explicit Objectives
• An AI system’s objectives can be explicit (directly programmed by the developer).
• They can also be implicit (set via rules specified by a human, or where the system is capable of learning new objectives).
• For instance: a large language model whose objectives have not been explicitly programmed, but have been acquired partly through imitation learning from human-generated text and partly through reinforcement learning from human feedback.
AI System: Ability to “infer”
• The concept of “inference” generally refers to the step in which a system generates an output from its inputs, typically after deployment.
• When performed during the build phase, inference in this sense is often used to evaluate a version of a model, particularly in the machine learning context.
• “Infer how to generate outputs” should be understood as also referring to the build phase of the AI system, in which a model is derived from inputs/data.
Recital 12: Ability to infer
• A key characteristic of AI systems is their capability to infer.
• This capability refers to the process of obtaining the outputs, such as predictions, content, recommendations, or decisions, which can influence physical and virtual environments, and to a capability of AI systems to derive models or algorithms, or both, from inputs or data.
• The techniques that enable inference while building an AI system include machine learning approaches that learn from data how to achieve certain objectives, and logic- and knowledge-based approaches that infer from encoded knowledge or symbolic representations of the task to be solved.
• The capacity of an AI system to infer transcends basic data processing by enabling learning, reasoning or modelling.
AI System: Autonomy and Adaptiveness
• ‘AI system autonomy’ refers to ‘the degree to which a system can learn or act without human involvement’ following the delegation of autonomy by humans.
• Human supervision can occur at any stage of the AI system lifecycle, such as during AI system design, data collection and processing, development, verification, validation, deployment, or operation and monitoring.
• Some AI systems can generate outputs without those outputs being explicitly described in the AI system’s objective and without specific instructions from a human.
• Adaptiveness, as contained in the definition of an AI system, usually relates to AI systems based on machine learning that can continue to evolve their models after initial development.
• Examples include a speech recognition system that adapts to an individual’s voice, or a personalised music recommender system.
• AI systems can be trained once, periodically, or continually.
• Through such training, some AI systems may develop the ability to perform new forms of inference not initially envisioned by their developers.
Recital 12: Adaptiveness and Autonomy of AI Systems
• AI systems are designed to operate with varying levels of autonomy, meaning that they have some degree of independence of actions from human involvement and of capabilities to operate without human intervention.
• The adaptiveness that an AI system could exhibit after deployment refers to self-learning capabilities, allowing the system to change while in use.
Taxonomy of Actors in the AI Act
• Providers
• Deployers
• Operators
• Importers
• Distributors
• Authorized Representatives
Article 3(3): Provider
Article 3(3) ‘provider’
means a natural or legal person, public authority, agency or other body that develops an AI system
or a general-purpose AI model or that has an AI system or a general-purpose AI model developed
and places it on the market or puts the AI system into service under its own name or trademark,
whether for payment or free of charge;
Article 3(3): Provider
• At the center of the AI value chain is the person who develops the AI system or GPAI model under its own name or trademark.
• This includes developers of AI systems and GPAI models when their products, or the output of their products, reach the EU market under certain circumstances:
- Placing AI on the market: first making available an AI system or a GPAI model on the EU market;
- Putting AI into service: supplying an AI system for first use directly to the deployer, or for own use, in the EU market; and
- Producing AI output: developing an AI system that produces output used in the EU (Article 2(1)(c)).
Article 3(4): Deployer
Article 3(4) ‘deployer’
A ‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.
Article 3(4): Deployer
Deployers are parties that use AI systems in connection with their professional or commercial activity and that:
• are established or located in the EU; or
• use AI systems to generate outputs that are used in the EU (Article 2(1)(c)).
For instance, a company using a third-party AI system for customer service or employee monitoring would be regarded as a deployer.
Circumstances under which Deployers can become Providers
• A deployer will be considered a provider of a high-risk AI system under the Act, and will be bound by the obligations of a provider under Article 16, in any one of the following circumstances (Article 25(1)):
- the deployer puts its own name or trademark on a high-risk AI system that has already been placed on the market, i.e., it uses the system under its own brand rather than under the brand of a third party;
- the original AI system is high-risk, and the deployer’s customisations result in an AI system different from the original (i.e., a “substantial modification”) while the system remains high-risk; or
- the deployer modifies an AI system not originally classified as high-risk in such a way that it becomes high-risk.
Article 3: Other Actors
• (6) Importer: a natural or legal person located or established in the EU that places on the market an AI system bearing the name or trademark of a natural or legal person established outside the EU.
• (7) Distributor: a natural or legal person in the supply chain, other than the provider or the importer, that makes an AI system available on the Union market.
• (8) Operator: a provider, product manufacturer, deployer, authorised representative, importer or distributor.
Article 2: Scope
Article 2
1. This Regulation applies to:
(a) providers placing on the market or putting into service AI systems or placing on the market general-purpose AI models in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
(b) deployers of AI systems that have their place of establishment or are located within the Union;
(c) providers and deployers of AI systems that have their place of establishment or are located in a third country, where
the output produced by the AI system is used in the Union;
(d) importers and distributors of AI systems;
(e) product manufacturers placing on the market or putting into service an AI system together with their product and
under their own name or trademark;
(f) authorised representatives of providers, which are not established in the Union;
(g) affected persons that are located in the Union.
Article 2: Scope
The AI Act covers all major AI models, systems, and applications that are placed
on the market, put into service, or are expected to be used within the European
Union, regardless of whether the provider or user is physically based in the EU or
not.
The scope of its application includes:
• Providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU, regardless of whether they are located in the EU or not;
• Deployers of AI systems that have their place of establishment or are located in the EU;
Article 2: Scope
• Providers and deployers of AI systems that have their place of establishment or are located in a third country, where the output of their AI system is used within the EU;
• Importers and distributors of AI systems;
• Product manufacturers putting into service or placing on the market an AI system together with their product and under their own name or trademark;
• Authorized representatives of providers not established in the EU; and
• Affected persons located in the EU.
Article 2: Scope (Exemptions)
• The Regulation does not apply to AI systems used for military, defence, or national security purposes, or to AI systems used by public authorities of third countries or by international organisations for law enforcement and judicial cooperation, provided adequate safeguards for individuals’ fundamental rights are in place.
• It does not apply to AI systems or models developed and put into service solely for scientific research and development.
• It does not apply to research, testing, or development activities on AI systems before they are placed on the market or put into service.
• It does not affect existing EU laws on data protection, privacy, and confidentiality.
• It does not apply to individuals using AI systems for purely personal, non-professional activities.
• It does not apply to AI systems released under free and open-source licences, unless they are high-risk or fall under certain articles.
General Purpose AI Systems
• What are general-purpose AI systems under the AI Act?
• Why regulate GPAI systems?
Regulation of General-Purpose AI Models under the AI Act
• A general-purpose AI (GPAI) model is defined as an AI model that displays significant generality and is capable of competently performing a wide range of tasks. These models can be integrated into a variety of downstream systems or applications.
• GPAI models are foundation models: they are trained on large datasets and can be used for many different tasks without much fine-tuning.
General Purpose AI Models
• Article 3(63)
“an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”
Key Characteristics of GPAI models
• Generality: GPAI models can perform a wide range of distinct tasks. GPAI models are also often referred to as foundation models. They have purposefully been designed to perform a wide range of tasks and to adapt easily to new situations. To this end, they are trained on very broad sets of unlabelled data and can be used for many different tasks without much fine-tuning. This also means that they can be used in, and adapted to, a wide range of applications for which they were not originally, intentionally, or specifically designed.
• Training data: these models are typically trained on large datasets through various methods, such as self-supervised, unsupervised, or reinforcement learning.
• Integration: while essential, these models alone do not constitute AI systems. They require additional components, such as user interfaces, to be integrated into downstream systems or applications.
GPAI Systems
• GPAI models require the addition of further components, such as a user interface, to become AI systems. When fitted with a user interface and placed on the market for non-high-risk purposes, they usually become a ‘GPAI system’, meaning an AI system which is based on a GPAI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems (Article 3(66) AI Act).
• An AI model, including a GPAI model, can be an essential part of an AI system (referred to in the definition as a “downstream system”), but does not constitute such a system on its own (for instance, large language models).
Risk Classification for GPAI under the AI Act
• The AI Act divides GPAI models into two categories: the first is simply those models that count as GPAI; the second is a subset of these models identified as posing a systemic risk.
• GPAI models that entail systemic risk are deemed to have high-impact capabilities, meaning that any negative incidents could have a disproportionate impact on the organisations and end users that rely on them.
Article 51: Classification of GPAI models as GPAI models with systemic risk
1. A general-purpose AI model shall be classified as a general-purpose AI model with systemic risk if it meets
any of the following conditions:
(a) it has high impact capabilities evaluated on the basis of appropriate technical tools and methodologies,
including indicators and benchmarks;
(b) based on a decision of the Commission, ex officio or following a qualified alert from the scientific panel, it
has capabilities or an impact equivalent to those set out in point (a) having regard to the criteria set out in
Annex XIII.
2. A general-purpose AI model shall be presumed to have high impact capabilities pursuant to paragraph 1, point (a), when the cumulative amount of computation used for its training measured in floating point operations is greater than 10^25.
• How is ‘high impact capability’ assessed?
• High impact capability is presumed for any model where the cumulative amount of computation used for its training is greater than 10^25 floating point operations (FLOPs).
• Note that FLOPs here is a count of the total floating point operations used in training, a measure of cumulative compute, whereas FLOPS (floating point operations per second) measures a computer’s processing speed. More training compute generally yields a more capable model, which in turn means higher potential risk (see the worked sketch below).
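As a worked illustration of the Article 51(2) presumption, the sketch below estimates training compute and compares it with the 10^25 FLOPs threshold. The ‘6 × parameters × training tokens’ approximation is a common rule of thumb from the scaling-law literature, not something the Act prescribes, and the model sizes used are hypothetical.

```python
# Hedged sketch: checking the Article 51(2) systemic-risk presumption.
# The 6 * params * tokens estimate is a rough community heuristic for
# dense transformer training compute, not an AI Act formula.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) threshold

def estimate_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Very rough estimate: ~6 FLOPs per parameter per training token."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    # The Act presumes high-impact capabilities above 10^25 training FLOPs.
    return estimate_training_flops(n_parameters, n_training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
print(f"{estimate_training_flops(70e9, 15e12):.2e}")   # 6.30e+24 -> below threshold
print(presumed_systemic_risk(70e9, 15e12))             # False

# A hypothetical 500B-parameter model trained on 20T tokens:
print(f"{estimate_training_flops(500e9, 20e12):.2e}")  # 6.00e+25 -> above threshold
print(presumed_systemic_risk(500e9, 20e12))            # True
```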
Annex XIII: Criteria for the designation of GPAI models with ‘systemic risk’ referred to in Article 51
For the purpose of determining that a general-purpose AI model has capabilities or an impact equivalent to
those set out in Article 51(1), point (a), the Commission shall take into account the following criteria:
(a) the number of parameters of the model;
(b) the quality or size of the data set, for example measured through tokens;
(c) the amount of computation used for training the model, measured in floating point operations or indicated
by a combination of other variables such as estimated cost of training, estimated time required for the training,
or estimated energy consumption for the training;
(d) the input and output modalities of the model, such as text to text (large language models), text to image,
multi-modality, and the state of the art thresholds for determining high-impact capabilities for each modality,
and the specific type of inputs and outputs (e.g. biological sequences);
(e) the benchmarks and evaluations of capabilities of the model, including considering the number of tasks
without additional training, adaptability to learn new, distinct tasks, its level of autonomy and scalability, the
tools it has access to;
(f) whether it has a high impact on the internal market due to its reach, which shall be presumed when it has
been made available to at least 10 000 registered business users established in the Union;
(g) the number of registered end-users.
Article 55: Obligations for providers of GPAI with ‘systemic risk’
1. In addition to the obligations listed in Articles 53 and 54, providers of general-purpose AI models with systemic
risk shall:
(a) perform model evaluation in accordance with standardised protocols and tools reflecting the state of the art,
including conducting and documenting adversarial testing of the model with a view to identifying and mitigating
systemic risks;
(b) assess and mitigate possible systemic risks at Union level, including their sources, that may stem from the
development, the placing on the market, or the use of general-purpose AI models with systemic risk;
(c) keep track of, document, and report, without undue delay, to the AI Office and, as appropriate, to national
competent authorities, relevant information about serious incidents and possible corrective measures to address
them;
(d) ensure an adequate level of cybersecurity protection for the general-purpose AI model with systemic risk and
the physical infrastructure of the model.
Article 55: Obligations for providers of GPAI with ‘systemic risk’
(2) Providers of general-purpose AI models with systemic risk may rely on codes of practice within the meaning of Article 56 to demonstrate compliance with the obligations set out in paragraph 1 of this Article, until a harmonised standard is published. Compliance with European harmonised standards grants providers the presumption of conformity to the extent that those standards cover those obligations. Providers of general-purpose AI models with systemic risk who do not adhere to an approved code of practice or do not comply with a European harmonised standard shall demonstrate alternative adequate means of compliance for assessment by the Commission.