As the latest addition to the EU’s digital strategy, the European Parliament and the Council reached a political agreement in December 2023 on the Artificial Intelligence Act, alternatively termed the ‘AI Act’. It is the first comprehensive law of its kind governing AI.

The European Commission’s initial proposal of April 2021 was followed by months of negotiation between the European Parliament and the Council. The Act paves the way to fulfilling the EU’s objective of providing legal certainty and safety for AI within a unified market. The agreed text was later approved unanimously by all Member States on February 2nd 2024 and is currently slated for a plenary vote scheduled to take place between April 10th and April 11th.

Through this regulation, the EU aims to provide its Member States with a comprehensive legal framework designed to ensure the responsible use of AI systems. 

The Act defines what may classify as an AI system under Article 3(1) of the draft, which states that:

“AI system means a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.” 

Aside from this definition, the Act follows a risk-based approach, classifying AI systems into four categories: 

  • Minimal risk 
  • Limited risk 
  • High risk 
  • Unacceptable risk 

Minimal Risk & Limited Risk: The majority of AI systems are considered low-risk. Applications such as AI-powered recommender systems or spam filters carry no mandatory obligations, given the minimal or non-existent risk they pose to citizens’ rights or safety. Companies may, however, opt to voluntarily adhere to additional codes of conduct for these AI systems.

High Risk: Instances of high-risk AI systems encompass various critical infrastructures, such as those in the water, gas, and electricity sectors, along with medical devices. Additionally, systems governing access to educational institutions or used in personnel recruitment, as well as those employed in law enforcement, border control, the administration of justice, and democratic processes, are considered high-risk. Furthermore, biometric identification, categorisation, and emotion-recognition systems fall into the high-risk category. These systems are subject to strict requirements, including risk-mitigation systems, detailed documentation, and human oversight. 

Unacceptable Risk: This is the highest risk category; systems falling within it are considered a clear threat to personal security and fundamental human rights, and are therefore banned outright. The prohibition also extends to certain biometric systems, such as emotion-recognition systems used in the workplace. 

Each classification carries its own tailored obligations and limitations for providers, such as transparency obligations. Failure to adhere to these obligations will result in fines: up to €35 million or 7% of global annual turnover (whichever is higher) for violations of banned AI applications, up to €15 million or 3% of global turnover for violations of other obligations, and up to €7.5 million or 1.5% of global turnover for the supply of incorrect information. 
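
As a purely illustrative aside, the “whichever is higher” mechanism behind these caps is simple arithmetic: the applicable ceiling is the greater of the fixed amount and the stated percentage of worldwide annual turnover. The short Python sketch below (a hypothetical helper with invented tier names, not an official calculator or legal tool) makes this concrete:

```python
# Illustrative sketch only: how the "whichever is higher" cap in the AI Act's
# penalty tiers works arithmetically. Figures follow the December 2023
# political agreement; tier names are invented for illustration.

FINE_TIERS = {
    "banned_ai_practices": (35_000_000, 0.07),    # €35M or 7% of global annual turnover
    "other_obligations": (15_000_000, 0.03),      # €15M or 3%
    "incorrect_information": (7_500_000, 0.015),  # €7.5M or 1.5%
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a tier: the higher of the
    fixed amount and the percentage of worldwide annual turnover."""
    fixed_amount, turnover_pct = FINE_TIERS[tier]
    return max(fixed_amount, turnover_pct * global_annual_turnover_eur)

# Example: a company with €2 billion turnover violating a banned AI practice
# faces a ceiling of max(€35M, 7% × €2B) = €140M.
print(max_fine("banned_ai_practices", 2_000_000_000))  # 140000000.0
```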

Implementation of, and adherence to, these rules at a European level will be overseen by the newly established ‘European AI Office’ within the European Commission. It will be the first global enforcement body on AI and is also expected to serve as an international reference point. 

 

This article is not intended to constitute legal advice, nor does it exhaust all relevant aspects of the topic.

Author: Julia Gauci  
