AI May 22, 2024

Decision on the AI Act on 21 May

Background

The rapid development of artificial intelligence (‘AI’) in recent years has certainly not escaped anyone’s attention, and both the risks and the opportunities have been heavily debated. As we wrote on 21 December last year, a new piece of EU legislation in the field of AI has been in the pipeline for some time. On 21 May 2024, the AI Act (‘AIA’) was adopted, making it the world’s first comprehensive AI legislation.

The Regulation is based on a proposal from 2021 and has since been the subject of intense negotiations in Brussels. It can best be described as a compromise between regulating potentially harmful AI systems and the desire to strengthen the EU’s competitiveness by promoting the development of AI within the EU.

Main features of the Regulation

The AIA has a broad scope – both in terms of which AI systems fall under its definitions and in terms of geographical applicability. Like other EU legal acts, such as the GDPR, the AIA will in various ways affect third-country actors that target users in the EU.

The Regulation is designed according to a tiered, risk-based model: the higher the risk associated with an AI system, the stricter the requirements. The Regulation categorises AI systems as follows:

Unacceptable risk

    • AI systems that threaten fundamental rights – e.g. biometric categorization and social scoring. These systems will be banned.

High risk

    • AI systems used in sensitive areas – such as critical infrastructure, education, and the administration of justice.

Limited risk

    • AI systems that interact with individuals in ways other than the above – e.g. chatbots, deepfakes.

Minimal risk

    • AI systems that do not fall under any of the above categories – e.g. spam filters.

AI systems categorised as posing an unacceptable risk will not be allowed at all. AI systems categorised as high risk will be subject to the most stringent requirements – for example, impact assessments, documentation requirements, human oversight and data security. For AI systems categorised as limited risk, the main requirement is that users must be informed that the content is generated by an AI system. For AI systems categorised as minimal risk, there are no specific requirements, but operators are encouraged to follow voluntary codes of conduct. Specific requirements will also be imposed on so-called foundation models – AI systems used for general purposes, such as ChatGPT. Foundation models must, among other things, disclose the data used to train the model, prevent the generation of illegal content and account for energy consumption.
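The tiered structure described above can be sketched as a simple lookup. The mapping below is our own simplification for illustration – the names and the obligation lists are drawn from the summary in this article, not from the text of the Regulation itself:

```python
# Illustrative, non-exhaustive mapping of AIA risk tiers to example obligations.
RISK_TIERS = {
    "unacceptable": {"allowed": False, "obligations": ["banned outright"]},
    "high": {"allowed": True, "obligations": [
        "impact assessments", "documentation", "human oversight", "data security"]},
    "limited": {"allowed": True, "obligations": [
        "inform users that content is AI-generated"]},
    "minimal": {"allowed": True, "obligations": [
        "voluntary codes of conduct (encouraged)"]},
}

def obligations_for(tier: str) -> list[str]:
    """Return example obligations for a tier; prohibited tiers raise an error."""
    entry = RISK_TIERS[tier]
    if not entry["allowed"]:
        raise ValueError(f"{tier}-risk AI systems are prohibited under the AIA")
    return entry["obligations"]
```

The point of the sketch is the structure: classification comes first, and the set of obligations follows mechanically from the tier.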

The requirements also depend on the type of actor involved: the AIA places higher demands on providers of AI systems than on end users. It is therefore important that actors subject to the Regulation determine both which role they fall under and which risk level applies to the AI system in question.

Enforcement and sanctions

Enforcement will take place at both national and EU level. A European AI Office has been established within the EU; it has not yet been decided which authority or authorities will have supervisory responsibility in Sweden. Sanctions in the form of administrative fines can be imposed for violations of the AIA as follows:

  • up to €35 million or 7% of global annual turnover, whichever is higher – for infringements relating to prohibited AI systems;
  • up to €15 million or 3% of global annual turnover, whichever is higher – for infringements of other requirements under the AIA; and
  • up to €7.5 million or 1% of global annual turnover, whichever is higher – for providing incorrect information to supervisory bodies.

For small and medium-sized enterprises, the fine is instead capped at the lower of the two amounts indicated above.
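The cap arithmetic can be illustrated with a short sketch. Under the AIA, the applicable ceiling for most undertakings is the higher of the fixed amount and the turnover-based amount, while for SMEs it is the lower; the function name and tier labels below are our own, not terms from the Regulation:

```python
def penalty_cap_eur(tier: str, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Illustrative AIA fine ceiling per infringement tier.

    tier: 'prohibited' | 'other' | 'incorrect_info'
    Larger undertakings: the higher of the fixed and turnover-based amounts.
    SMEs: the lower of the two.
    """
    fixed, pct = {
        "prohibited": (35_000_000, 0.07),
        "other": (15_000_000, 0.03),
        "incorrect_info": (7_500_000, 0.01),
    }[tier]
    turnover_based = global_turnover_eur * pct
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)
```

For example, for a company with €1 billion in global annual turnover, the ceiling for a prohibited-system infringement would be the turnover-based €70 million rather than the fixed €35 million; for an SME with the same turnover, the fixed amount would apply instead. Note that these are statutory ceilings – the actual fine in a given case depends on many further factors.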

Implementation

Now that the Council has formally adopted the Regulation, on 21 May 2024, it will enter into force 20 days after its publication in the Official Journal of the EU, which is expected in the coming days. Implementation will then be gradual: the prohibition on unacceptable-risk AI systems will apply 6 months after entry into force, the rules on foundation models after 12 months, and the remainder of the Regulation after 24 to 36 months. It should be noted, however, that even before then, AI systems are subject to other legislation, such as the GDPR and product safety legislation.
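The staggered timeline can be worked out with simple date arithmetic. Since the Official Journal publication date was not yet known at the time of writing, the publication date used below is a hypothetical placeholder:

```python
from datetime import date, timedelta

# Hypothetical publication date in the Official Journal (placeholder –
# the actual date was not yet known when this article was written).
publication = date(2024, 7, 12)
entry_into_force = publication + timedelta(days=20)

def months_later(d: date, months: int) -> date:
    """Add whole calendar months to a date (day of month unchanged)."""
    y, m = divmod(d.month - 1 + months, 12)
    return d.replace(year=d.year + y, month=m + 1)

milestones = {
    "prohibition on unacceptable-risk AI systems applies": months_later(entry_into_force, 6),
    "rules on foundation models apply": months_later(entry_into_force, 12),
    "most remaining provisions apply": months_later(entry_into_force, 24),
}
for label, when in milestones.items():
    print(f"{when}: {label}")
```

With the placeholder publication date above, entry into force would fall on 1 August 2024 and the first milestone in February 2025 – the pattern, not the specific dates, is the point.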

Comment

The AIA establishes a framework for AI that could have a major impact well beyond the EU, providing greater predictability in the market and stronger protection of individual rights. It remains to be seen whether the AIA will have the same impact as the GDPR, but the EU’s ambitions are unmistakable. Some questions remain, however: how effectively will the AIA adapt to the rapid development of AI technology, and how well has the EU struck the balance between regulation and promoting innovation?

Organisations exploring the possibilities and obligations of the AIA should first identify which type of actor they are categorised as and which level of risk may be involved. Where appropriate, they should also establish procedures and designate persons responsible for compliance. In addition, related issues such as data management and information security should be considered.

We at Cirio are monitoring the ongoing development of the AIA closely. With our AI Readiness Scanner we can help organisations that want to work proactively on this issue.

Contact us and we will be happy to tell you more.
