Posted: 19/05/2021
The European Commission has announced its proposed first-ever legal framework for regulating artificial intelligence (AI) systems (referred to throughout this article as the ‘AI regulation’). With ‘trustworthy’ AI at its heart, the AI regulation aims to strike the right balance between ensuring safety and encouraging innovation within the EU.
The AI regulation will apply to AI systems, with obligations falling on providers, importers, distributors and operators/users of AI. The definition of an AI system is considerably broader than those used in other jurisdictions, and intentionally so: the Commission recognises the fast-paced nature of change in the sector and aims to ensure the definition is both ‘technology neutral and future proof’.
A risk-based approach is suggested. AI systems will be differentiated into categories based on their risk of harm to the health and safety of individuals or an adverse impact on the rights and protections enshrined in the EU Charter of Fundamental Rights.
The categories are:
- unacceptable risk – AI systems that are prohibited outright;
- high risk – AI systems subject to strict obligations before they can be placed on the market or used;
- low risk – AI systems subject to transparency obligations; and
- minimal risk – AI systems that may be used freely.
Title II of the AI regulation prohibits AI systems that pose an unacceptable risk to fundamental rights under the charter. This bans AI systems that deploy subliminal techniques or exploit a person’s vulnerabilities (including age) to distort behaviour in a way that causes or is likely to cause harm. Additionally, AI systems used by governments as a form of ‘social scoring’ would be prohibited.
The AI regulation also addresses the live use of remote biometric identification systems in public spaces for law enforcement. In principle, this is prohibited by Title II; however, there is a narrow set of exceptions. These include the search for specific victims of crime (such as a missing child); the prevention of a specific, substantial and imminent threat to life or safety of persons or of a terrorist attack; or the detection of a perpetrator or suspect of a criminal offence.
Other AI systems that are identified as high risk will be subject to numerous strict obligations before they can be placed on the market or used in the EU. This includes the creation and implementation of a risk management system which must be maintained for the entire lifecycle of the AI system.
Other provisions under Title III of the AI regulation include:
- requirements for the quality and governance of the data sets used to train, validate and test high-risk AI systems;
- the preparation of technical documentation and automatic record-keeping (logging);
- transparency obligations and the provision of information to users;
- effective human oversight; and
- appropriate levels of accuracy, robustness and cybersecurity.
The requirements for AI systems identified as posing low risk are considerably less stringent, with a focus on transparency. Title IV provides such obligations for AI systems that (i) interact with humans; (ii) are used to detect emotions or determine association with (social) categories based on biometric data; or (iii) generate or manipulate content (ie deep fakes).
The AI regulation encourages free use of AI systems identified as posing minimal risk and the Commission has acknowledged that the majority of AI falls into this category.
In addition to the above requirements, the AI regulation introduces a number of measures in support of innovation within this area. These include national regulatory sandbox schemes and measures to reduce the regulatory burden on SMEs and start-up companies.
The AI regulation will be enforced by existing systems of governance at member state level. To ensure cooperation at EU level, the Commission additionally proposes to establish a European Artificial Intelligence Board, comprising representatives from both the Commission and member states.
Notably, infringements may result in fines up to €30,000,000 or, for companies, up to 6% of total worldwide annual turnover for the preceding financial year – whichever is higher.
The EU has already demonstrated itself as a front-runner in leading the way on ethical AI and the proposed AI regulation is another step in this direction. This will be a groundbreaking piece of legislation which, to become law, will need to be adopted by the European Parliament and the Council. Once adopted, the AI regulation would be directly applicable across the EU without the need for implementing legislation in member states.
Although the AI regulation will not be directly applicable in the UK, its reach will still be felt in two ways. Firstly, the AI regulation will apply if an AI system is available in the EU or its use affects people located in the EU. Secondly, there have been recommendations to regulate AI in the UK and therefore the AI regulation may form a critical example of how to achieve this.
In doing so, the AI regulation will test the relationship between the law and AI. Whilst EU legislation can take up to 18 months to adopt and implement, the breakneck speed of AI’s evolution is well documented. This raises questions as to whether the law will be able to move fast enough to effectively regulate constantly evolving AI. Either way, businesses will need to remain abreast of the changing legal landscape to avoid infringing the law and incurring significant fines.
This article has been co-written with Gabby Bytautaite, a trainee solicitor in the commercial dispute resolution team.