The European Commission introduced the AI Act, described as the first-ever legal framework on AI, in combination with the AI Innovation Package and the Coordinated Plan on AI, to promote trustworthy AI and innovation across the EU.
The European Parliament and the Council of the EU reached a political agreement on the AI Act in December 2023.
The European AI Office was established in February 2024 and oversees the AI Act's enforcement and implementation in cooperation with the EU member states.
The objective of the European AI Office is to ensure that AI is safe and trustworthy.
The purpose of the AI Act was to provide AI developers and deployers with "clear requirements and obligations regarding specific uses of AI". AI systems must respect fundamental rights, safety, and ethical principles. The Act recognizes that AI models can be very powerful and impactful, and it addresses the risks posed by those systems.
The EU highlighted that it is often not possible to determine why an AI system reached a particular decision or prediction. An example was used of an individual being denied employment without knowing why that decision was made. All AI systems that present a clear threat to the safety, livelihoods, and rights of people will be banned.
The rules are intended to:
- address AI risks
- prohibit unacceptable risks
- identify and set rules for high-risk applications
- require a conformity assessment prior to deployment
- enforce the rules once an AI system is placed on the market
- establish a governance structure
The regulatory framework defines four levels of AI system risk: minimal, limited, high, and unacceptable.
There are strict obligations for high-risk AI systems (e.g. transport infrastructure, education, employment, financial services, law enforcement, immigration, and the judicial system):
- adequate risk assessment and mitigation systems
- high quality datasets
- activity logs
- detailed documentation
- clear information to the deployer
- appropriate human oversight
- high level of robustness, security, and accuracy
An example was used that all remote biometric identification systems in publicly accessible spaces for law enforcement are prohibited, with only strictly defined exceptions, such as searching for a missing child or responding to an imminent terrorist threat.
Limited risk refers to risks arising from a lack of transparency. An example was used that users should be told when they are interacting with a chatbot. Another example was informing the public by labeling content as "artificially generated".
An example of minimal or no risk is spam filters, which may be used freely.
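The four-tier structure described above can be sketched as a simple lookup. This is purely an illustrative sketch, not anything defined by the Act itself: the tier names come from the framework, but the example use cases and all identifiers below are assumptions chosen to mirror the examples in this summary.

```python
# Hypothetical illustration of the AI Act's four risk tiers.
# The tier names (minimal, limited, high, unacceptable) come from the
# regulatory framework; the use-case mapping below is illustrative only.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"


# Illustrative mapping of example use cases to tiers, based on the
# examples given in this summary (not an official classification).
EXAMPLE_TIERS = {
    "spam filter": RiskTier.MINIMAL,
    "customer-service chatbot": RiskTier.LIMITED,   # transparency duty: disclose it is AI
    "cv screening for hiring": RiskTier.HIGH,       # employment is a listed high-risk area
    "remote biometric id in public spaces": RiskTier.UNACCEPTABLE,  # narrow exceptions only
}


def classify(use_case: str) -> RiskTier:
    """Return the illustrative risk tier for a known example use case."""
    return EXAMPLE_TIERS[use_case.lower()]
```

Under this sketch, `classify("spam filter")` would return the minimal tier, while a hiring-screening tool would fall under the high-risk obligations listed above.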
The AI Office tasks include:
- Supporting the AI Act and enforcing general-purpose AI rules
- Strengthening the development and use of trustworthy AI
- Fostering international cooperation
- Cooperating with institutions, experts, and stakeholders
The AI Office also oversees the AI Pact for businesses and stakeholders to share best practices.
The AI Innovation Package, launched in January 2024 by the Commission, was intended to support startups and SMEs in developing trustworthy AI.
The Coordinated Plan on Artificial Intelligence is intended to accelerate investment in AI, implement strategy, and develop AI policies to maximize Europe's potential to compete globally.
Sources
AI Act (European Commission) https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai [Accessed 4/6/2024]
EU AI Act: first regulation on artificial intelligence (European Parliament) https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence [Accessed 4/6/2024]
European AI Office (European Commission) https://digital-strategy.ec.europa.eu/en/policies/ai-office [Accessed 4/6/2024]
Commission launches AI innovation package to support Artificial Intelligence startups and SMEs (European Commission) https://ec.europa.eu/commission/presscorner/detail/en/ip_24_383 [Accessed 4/6/2024]
Coordinated Plan on Artificial Intelligence (European Commission) https://digital-strategy.ec.europa.eu/en/policies/plan-ai [Accessed 4/6/2024]