Knowledge 23.12.21

Article – A compromise agreement on the AI Act has been reached


Artificial intelligence (AI) is, and has been, a hot topic over the last couple of years. With the agreement between the Council of the European Union and the European Parliament on 8 December 2023, Europe has taken a giant leap towards the adoption of a union-wide regulation specifically targeting AI.

Background

In April 2021, the European Commission presented a proposal for an AI Act with the intention of creating a uniform and horizontal legal framework for AI. The proposal followed a risk-based approach which could best be described as: the higher the risk, the stricter the rules.
The proposal has subsequently been handled by the Council and the Parliament, and on 8 December 2023 an agreement was finally reached. Over the coming weeks the details will be finalised, after which the presidency will submit the compromise wording to the member states' representatives for endorsement. The final proposal is expected to be presented in mid-January.

The content of the compromise agreement

Compared to the initial proposal, the main new elements of the compromise agreement can be summarised as follows:

• specific regulation on high-impact general-purpose AI models as well as high-risk AI systems,
• extension of the list of prohibited applications and use of AI,
• mandatory pre-launch fundamental rights impact assessment of high-risk AI systems, and
• establishment of an AI Office and other mechanisms for governance.

Important take-aways

We would especially like to highlight the following as important take-aways from the compromise agreement:

Foundation models

Further regulation has been suggested with regard to foundation models, i.e. large systems capable of performing a wide range of tasks, such as generating video, text, or computer code. In addition, it has been suggested that foundation models must comply with specific transparency obligations prior to their commercial deployment.

Fundamental rights impact assessment

According to the compromise agreement, deployers shall perform a fundamental rights impact assessment before putting a high-risk AI system on the market.

An AI Office and other governance bodies

An AI Office will be set up within the Commission, tasked with monitoring the most advanced AI models (general-purpose AI models) and enforcing the common application of the rules in all member states. The AI Office will be able to seek advice from a scientific panel of independent experts.
In addition, an AI Board, comprised of member states' representatives, will serve as a coordination platform and an advisory body to the Commission. Furthermore, an advisory forum will be set up for stakeholders to provide technical expertise to the AI Board.

Sanctions

The model for fines for violations of the AI Act resembles the approach of the GDPR: fines are, as a rule, set as a percentage of the breaching company's global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This would be EUR 35 million or 7 per cent for violations of prohibited AI applications, EUR 15 million or 3 per cent for violations of obligations under the AI Act, and EUR 7.5 million or 1.5 per cent for supplying incorrect information.
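The "whichever is higher" mechanic can be sketched as a simple calculation. The figures below are taken from the compromise agreement as described above; the function and tier names are our own illustration, not terminology from the Act.

```python
# Sketch of the GDPR-style fine model: each violation category carries a
# fixed amount and a percentage of global annual turnover in the previous
# financial year, and the applicable maximum is whichever is higher.
FINE_TIERS = {
    "prohibited_ai": (35_000_000, 0.07),          # EUR 35m or 7%
    "other_obligations": (15_000_000, 0.03),      # EUR 15m or 3%
    "incorrect_information": (7_500_000, 0.015),  # EUR 7.5m or 1.5%
}

def max_fine(violation: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum fine for a violation category: the higher of the
    fixed amount and the turnover-based percentage."""
    fixed, pct = FINE_TIERS[violation]
    return max(fixed, pct * global_annual_turnover_eur)

# Example: a company with EUR 1 billion in global annual turnover breaching
# the prohibited-AI rules faces up to 7% of turnover, i.e. EUR 70 million.
print(max_fine("prohibited_ai", 1_000_000_000))  # 70000000.0
```

For smaller companies the fixed amount will typically be the binding cap, while for large groups the turnover percentage dominates.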

Cirio’s comments on the AI Act and the future of AI

Cirio welcomes the AI Act, as it will provide more clarity and predictability regarding the development and use of AI. However, while the AI Act will take a more comprehensive approach to AI, it is important to bear in mind that rules already apply to the use of AI and that such rules will continue to apply even after the adoption of the AI Act. In addition to legal assessments, other actions must be taken to realise the benefits of using AI. Hence, the AI Act will only be one piece of the puzzle for businesses to consider in their AI journey. Some of these elements are the following:

Correct and adequate data

As the development, usage and fine-tuning of AI systems are inevitably linked to and presuppose correct and adequate data, it is vital to identify and properly prepare the data you wish to use before adding AI functionality. If not, the output could be incorrect, and the results may not be trustworthy or even useful.

Restrictions on usage of data and IP

A vital aspect pertaining to data and AI is the right to use the data in question. If restrictions apply, whether regulatory or contractual, it may not be possible to use the data at all; alternatively, usage could lead to sanctions or breach of contract. Another central focus area will be to properly identify and manage intellectual property rights.

Hence, it is important to strive for good data quality and to ascertain that the data's use is not restricted by law, contract, or otherwise.

Personal data

AI functionality may also rely on and include the use of personal data. As many AI systems will likely be based and trained on personal data, and personal data will in many cases be processed as input or output, it is vital to remember that the GDPR remains applicable. As such, it is necessary to comply with the relevant rules of the GDPR when processing personal data in connection with the use of AI.

Responsibility

Another important topic to address is responsibility. As the future AI Act will impose requirements on actors within the AI lifecycle that differ depending on the actor's role, it is important to identify which role you will have (user, provider, importer, etc.).

Risk, due diligence, insurance, and governance

To mitigate risks, it is important to have good, in-depth knowledge of the tools you use and their providers, and to regulate relations and usage through adequate agreements. It is also important to assess whether future use of AI within your business requires changes to your insurance policies. Furthermore, to handle risks and comply with regulations on an ongoing basis, you will need the support of experienced personnel and a solid internal AI governance framework. The latter is vital, as the use of AI currently entails risks that need to be monitored, assessed, and handled in a structured and efficient manner.

Summary

To summarise, although giant steps have been taken to regulate AI in a more comprehensive manner, the legal landscape will remain fragmented and challenging. In addition, AI will not only presuppose adherence to legal requirements, but also actions both internally and externally, such as vendor due diligence, internal governance and measures relating to cyber, IT and information security. Hence, it is advisable that actors wishing to initiate AI activities adopt a comprehensive view, documented in an AI strategy, on how to act within this area.

Although the coming AI Act will become generally applicable two years after its entry into force, it is essential to act proactively and analyse, prepare, and adapt your business and organisation so as to be at the forefront when the rules start to apply. It can also be noted that several requirements pertaining to AI will be presented and will apply on a rolling basis, thus preceding the date when the AI Act enters into full force. This will occur through regulatory and voluntary steps initiated by the Commission.

Cirio will be closely monitoring the development of the AI Act and wants to be part of your AI journey, so please do not hesitate to contact us if you wish to learn more.

Further reading

“Commission welcomes political agreement on Artificial Intelligence Act”

“Council and Parliament strike a deal on the first rules for AI in the world”

“Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI”
