EU AI Act
19-12-2023 - 11:45
As part of its digital strategy, the EU wants to regulate artificial intelligence (AI) to ensure better
conditions for the development and use of this innovative technology. AI can create many
benefits, such as better healthcare; safer and cleaner transport; more efficient manufacturing;
and cheaper and more sustainable energy.
In April 2021, the European Commission proposed the first EU regulatory framework for AI. It
proposes that AI systems which can be used in different applications be analysed and classified
according to the risk they pose to users. The different risk levels will mean more or less
regulation. Once approved, these will be the world’s first rules on AI.
Parliament also wants to establish a technology-neutral, uniform definition for AI that could be
applied to future AI systems.
Unacceptable risk
AI systems considered a threat to people are classed as unacceptable risk and will be banned.
Some exceptions may be allowed for law enforcement purposes. “Real-time” remote biometric
identification systems will be allowed in a limited number of serious cases, while “post” remote
biometric identification systems, where identification occurs after a significant delay, will be
allowed to prosecute serious crimes and only after court approval.
High risk
AI systems that negatively affect safety or fundamental rights will be considered high risk and
will be divided into two categories:
1) AI systems that are used in products falling under the EU’s product safety legislation. This
includes toys, aviation, cars, medical devices and lifts.
2) AI systems falling into specific areas that will have to be registered in an EU database.
All high-risk AI systems will be assessed before being put on the market and also throughout
their lifecycle.
Generative AI, like ChatGPT, would have to comply with transparency requirements.
High-impact general-purpose AI models that might pose systemic risk, such as the more
advanced AI model GPT-4, would have to undergo thorough evaluations and any serious
incidents would have to be reported to the European Commission.
Limited risk
Limited risk AI systems should comply with minimal transparency requirements that would allow
users to make informed decisions. Users should be made aware when they are interacting with
AI; after interacting with an application, they can then decide whether they want to continue
using it. This includes AI systems that generate or manipulate image, audio or video content,
for example deepfakes.
Next steps
On 9 December 2023, Parliament reached a provisional agreement with the Council on the AI
Act. The agreed text will now have to be formally adopted by both Parliament and Council to
become EU law.
Before all MEPs have their say on the agreement, Parliament’s internal market and civil liberties
committees will vote on it.