What do you need to know about the AI Act?


On April 21, 2021, the European Commission published the much-awaited draft of the regulation laying down harmonised rules on artificial intelligence (AI Act). The publication is an intermediate outcome of a long discussion and evaluation process. Timelex was privileged to support the Commission as a legal expert in a study on liability for AI, which provided some of the analysis that drove the proposal. 

This article briefly discusses the most important areas covered by the AI Act, the approach proposed by the Commission, as well as the next steps to be expected. 

1.    What does AI system mean and what is the purpose of the regulation? 

The AI Act defines an “artificial intelligence system” (AI system) as software that fulfils the following two conditions: 

  • it is developed with one or more of the techniques and approaches listed in Annex I of the AI Act, which include, for example, machine learning (ML), logic- or knowledge-based approaches and statistical approaches; and 
  • it can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
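
To make the breadth of this definition tangible, the sketch below is a deliberately trivial illustration (using scikit-learn as an example library of our own choosing, not something referenced by the Act): even a toy classifier arguably satisfies both conditions.

```python
# Purely illustrative: a very simple statistical / machine learning model that
# (1) is developed with a technique of the kind listed in Annex I and
# (2) generates predictions for a human-defined objective.
from sklearn.linear_model import LogisticRegression

X = [[0.0], [1.0], [2.0], [3.0]]   # toy training inputs
y = [0, 0, 1, 1]                   # human-defined objective: label the inputs

model = LogisticRegression().fit(X, y)
print(model.predict([[2.5]]))      # an output (prediction) influencing its environment
```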

The high-level purpose of the AI Act is to provide a uniform legal framework for the development, marketing and use of AI systems, in which both the benefits and the risks of AI are adequately addressed at Union level. 

2.    What does “risk based” approach mean? 

Not all AI systems are treated by the proposed regulation in the same way. The Commission’s approach is “risk-based”, classifying AI systems according to the risk they pose to human beings:

  • unacceptable risk – AI practices the Commission considers a clear threat to EU citizens; their use will be banned;
  • high-risk – AI systems which may affect people’s safety or fundamental rights; they will be strictly regulated, as further described below;
  • limited risk – AI systems which, because of their interaction with humans, may have a certain impact; for those systems, transparency requirements will apply, and users will need to be informed that they are talking to or being served by a machine;
  • minimal risk – all other AI systems, for which the AI Act provides no additional obligations. 

The Commission emphasises that most AI systems should fall into the last category. However, this remains to be seen, as some types of high-risk AI systems, and even the prohibited practices, appear to be defined quite broadly. 

3.    Which AI practices will be forbidden in the EU? 

It will be prohibited to place on the market, put into service or use: 

  • AI which uses subliminal techniques or exploits human vulnerabilities to distort a person’s behaviour in a harmful way:
    • AI deploying subliminal techniques beyond a person’s consciousness in order to materially distort that person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;
    • AI systems which exploit the vulnerabilities of specific groups of people to materially distort their behaviour or cause them harm;
  • Social scoring systems – AI used by public authorities to evaluate or classify the trustworthiness of natural persons over a certain period of time based on their social behaviour or personal traits, where this leads to detrimental treatment that is unrelated to the context in which the data was generated or disproportionate to the person’s social behaviour;
  • Real-time remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless one of the narrow exceptions applies. 

AI systems used or developed exclusively for military purposes are outside the scope of the proposed regulation. This limitation is dictated by the legal basis of the act, which focuses on internal market challenges (Article 114 TFEU) and therefore does not allow the inclusion of matters of national defence. Nonetheless, this leaves an important area of development, potentially involving the most dangerous systems, outside the foreseen oversight. 

4.    What are high-risk AI systems?

There are two broad categories of high-risk AI systems. 

The first one encompasses systems which could produce adverse outcomes for the health and safety of persons, in particular when they operate as components of products. For those systems, both of the following conditions must be fulfilled:

  • the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the EU harmonisation legislation; and 
  • the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment when put onto the market. 

This category of AI systems includes, for example, safety components of toys, medical devices and lifts, as well as safety features of motor vehicles, rail systems, aircraft and other machinery. A full list of the Union harmonisation legislation concerned is provided in Annex II of the AI Act. 

The second category are stand-alone AI systems which may pose a threat to the fundamental rights of persons. Those include, for example, AI systems intended to be used:

  • for ‘real-time’ and ‘post’ remote biometric identification of natural persons;
  • as safety components in the management and operation of critical infrastructure, such as road traffic and the supply of water, gas, heating and electricity;
  • for the purpose of determining access to educational institutions or assessing students or applicants;
  • for recruitment or for the evaluation of job applicants and employees;
  • by public authorities to evaluate the eligibility of natural persons for public benefits and services;
  • to evaluate the creditworthiness of natural persons or establish their credit score (with an exception for small-scale providers for their own use);
  • to dispatch, or to triage, emergency response services, such as medical aid or firefighters;
  • for various tasks by law enforcement authorities, including individual risk assessments, profiling of natural persons, crime analytics or lie detector tests;
  • for migration, asylum and border control management;
  • to assist the courts in researching and interpreting facts and the law and in applying the law to a concrete set of facts.

The full list is included in Annex III to the AI Act. The Commission will be empowered to amend this already long list by issuing delegated acts in the future, which provides somewhat greater flexibility in keeping it relevant.  

5.    How will high-risk AI systems be regulated?

High-risk AI systems can only be placed on the EU market or put into service if they comply with strict mandatory requirements. These include, in particular:

  • an obligation to establish a risk management system, consisting of a continuous and iterative process running throughout the lifecycle of the high-risk system, aimed at identifying and evaluating the risks associated with the AI system and adopting suitable measures against them; 
  • quality criteria for the datasets used for training and testing those systems. Interestingly, the providers of the systems will be allowed to process special categories of personal data (within the meaning of Article 9(1) GDPR) to monitor the AI for bias and correct it; 
  • a requirement to draw up and keep up to date the technical documentation on the system;
  • high-risk AI systems should be designed so that they can automatically log events while the system is operating; logs shall be kept by the providers of the system if the system is under their control (a minimal illustrative sketch follows this list);
  • the operation of high-risk AI systems must be sufficiently transparent to allow users to interpret the system’s output and use it appropriately; systems must come with instructions for use;
  • high-risk AI systems must be designed to enable humans to oversee them effectively, including to understand the capacities and limitations of the AI. Appropriate oversight features may include ones allowing users to override the decisions of the AI or to interrupt the system by means of a “stop” button. This requirement is directly related to black-box concerns, particularly in neural-network-based AI, where not even the creators would be able to replicate or understand the logic applied by the AI in arriving at any given conclusion; 
  • last but not least, high-risk AI systems must be designed and developed in such a way that they achieve, in light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, perform consistently in those respects throughout their lifecycle and are resilient, in particular against errors, inconsistencies in the operating environment and malicious actors. 
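
The logging and human-oversight obligations are outcome-oriented design requirements; the AI Act does not prescribe any particular technique. Purely as an illustration of the kind of engineering measures they imply, the sketch below (with hypothetical names such as MonitoredModel, chosen by us and not taken from the Act) wraps a model so that every output is automatically logged and a human overseer can halt the system.

```python
# Illustrative sketch only - not a prescribed implementation under the AI Act.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_system")

class MonitoredModel:
    """Hypothetical wrapper adding automatic event logging and a human 'stop' override."""

    def __init__(self, model):
        self.model = model      # any object exposing a predict() method
        self.stopped = False    # flipped by a human overseer

    def stop(self):
        # Human oversight: halt further automated outputs.
        self.stopped = True
        logger.warning("System halted by human overseer")

    def predict(self, features):
        if self.stopped:
            raise RuntimeError("System stopped by human overseer")
        output = self.model.predict(features)
        # Automatic event log: timestamp, input and output, kept by the provider
        # for as long as the system is under its control.
        logger.info("event=%s input=%r output=%r",
                    datetime.now(timezone.utc).isoformat(), features, output)
        return output
```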

Before a high-risk AI system is put on the market or into service, it must undergo a conformity assessment procedure. Once the assessment is completed, the system will have to be labelled with a CE marking. Certain high-risk AI systems will also have to be registered in an EU database maintained by the Commission. 

The AI Act also provides various obligations related to post-market monitoring. There will also be a regime for notifying national authorities of non-compliance of a high-risk AI system and of any corrective actions taken.

Most of the responsibilities listed above will fall on the so-called “provider” of the AI system, i.e. a person or entity that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge. However, there are also obligations for AI importers, distributors and users. For instance, the user of a high-risk AI system must follow the instructions for use and “feed” the system with relevant input data. Users must also monitor the system’s operation. 

6.    What requirements apply to other AI systems? 

The proposed AI Act does not provide similarly detailed requirements for lower-risk AI systems. However, transparency rules will apply, which means that users will have to be notified in the case of:  

  • AI systems intended to interact with natural persons (e.g. chatbots);
  • emotion recognition or biometric categorisation systems;
  • machine-generated content, such as images or videos, which resembles authentic persons or objects (“deep fakes”).

Some exceptions are provided, e.g. where the fact that the user is interacting with an AI is obvious from the circumstances and the context of use, for systems used to detect or investigate criminal offences, or for scientific or artistic purposes. 

Developers of non-high-risk AI may also adhere to voluntary codes of conduct.

7.    What new authorities and enforcement measures will be in place? 

The AI Act will set up a European Artificial Intelligence Board (EAIB) made up of representatives of the appropriate regulators from each Member State, as well as the European Data Protection Supervisor and the Commission. The EAIB will be responsible for a number of advisory tasks. EU Member States will also need to designate national competent authorities. 

The national regulators will be charged with the enforcement of the AI Act. They will be equipped with the power to impose “GDPR-style” administrative fines. For the most serious infringements, the national authorities will be able to issue fines of up to EUR 30 000 000 or, if the offender is a company, up to 6 % of its total worldwide annual turnover for the preceding financial year, whichever is higher. The highest fines are reserved for placing banned AI systems on the market and for failing to comply with the data quality requirements for training high-risk AI systems. 
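
As a quick illustration of how the “whichever is higher” rule works in practice, the following snippet (a hypothetical helper written for this article, not something defined in the Act) computes the maximum fine for a given worldwide annual turnover.

```python
# Illustration of the "whichever is higher" cap for the most serious infringements.
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    return max(30_000_000, 0.06 * worldwide_annual_turnover_eur)

# A company with EUR 1 billion in turnover faces a cap of EUR 60 million,
# since 6 % of its turnover exceeds the EUR 30 million floor.
print(max_fine_eur(1_000_000_000))  # 60000000.0
```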

8.    What are the next steps?  

The AI Act now needs to be adopted by the European Parliament and the Council (the Member States) in the ordinary legislative procedure. Once passed, the law will enter into force on the twentieth day following its publication. As an EU regulation, the AI Act will be directly applicable in all EU countries.

Importantly, there will be a two-year grace period within which AI systems will need to be brought into conformity with its requirements. 

Further reading:

Draft of the regulation here 

Questions and answers here