Lecture: ‘The regulation of Artificial Intelligence in Europe: structural differences between the European Union Regulation and the Council of Europe Framework Convention’, by Ángel Presno Linera
Ángel Presno Linera
Professor of Constitutional Law, University of Oviedo
In 2024, and within a very short space of time, two instruments were finalised: Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024, laying down harmonised rules on artificial intelligence and amending several Regulations and Directives (the Regulation on Artificial Intelligence, RAI), and the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (hereinafter, CETS), which was opened for signature at the Conference of Council of Europe Ministers of Justice in Vilnius on 5 September 2024 and has already been signed by Andorra, Georgia, Iceland, Norway, the Republic of Moldova, San Marino and the United Kingdom, as well as by Israel, the United States of America and the European Union.
During the drafting process, each institution was aware of what the other was doing; in fact, the Council of Europe Ministers decided to allow the European Union (EU) to join the negotiations, where it was represented by the European Commission, whose delegation included representatives of the European Union Agency for Fundamental Rights (FRA) and the European Data Protection Supervisor (EDPS). Indeed, Article 27.2 of the Framework Convention provides that Parties which are members of the European Union shall, in their mutual relations, apply European Union rules governing the matters within the scope of the Convention, without prejudice to its object and purpose and to its full application with the other Parties.
In the following lines I will comment on the most relevant structural differences between the two instruments (which, in the end, complement each other). Before doing so, and simply as a reminder, both the RAI and the CETS adopt a regulatory approach based on the risks that AI systems may generate, without losing sight of the objective of ensuring a high level of protection of health, safety and fundamental rights. The same goal is served by what are regarded as basic principles of action in the areas of oversight, equality and non-discrimination, privacy and personal data protection, reliability and transparency.
First, then, it must be stressed that the scope of application of the RAI is regional, the territory of the EU, even if it produces a ‘Brussels effect’ (the emulation outside the EU of European rules as a result of market mechanisms), whereas the CETS was born with a certain global vocation. Its Preamble underlines the need to establish, as a matter of priority, a globally applicable legal framework setting out common general principles and rules governing the activities within the lifecycle of artificial intelligence systems, one that effectively preserves shared values and harnesses the benefits of artificial intelligence for the promotion of those values in a manner conducive to responsible innovation. Under Article 30.1, the Convention is open for signature by the member States of the Council of Europe, the non-member States which have participated in its elaboration and the European Union.
Second, the RAI, pursuant to Article 288 of the Treaty on the Functioning of the European Union, ‘shall be binding in its entirety and directly applicable in all Member States’; the Framework Convention, by contrast, will bind only those States that decide to become parties to it. In line with this feature, the addressees of the CETS are the States, whereas the RAI is addressed to providers and deployers of AI systems.
Third, the RAI is an extensive and highly exhaustive piece of legislation (113 articles and 13 annexes), with very detailed provisions, whereas the CETS consists of a much smaller set of articles (36) and is, of course, far less detailed. The Convention itself stresses its ‘framework character…, which may be supplemented by further instruments to address specific issues relating to the activities within the lifecycle of artificial intelligence systems’ (paragraph 11 of the Preamble).
Fourth, the RAI consists, in general, of rules: it lays down precisely what may and may not be done with AI. The CETS, by contrast, opts for a principles-based approach, which means that it contains mandates of optimisation, capable of being fulfilled to different degrees. For example, the Regulation establishes a series of prohibited AI practices (Article 5) and imposes obligations to be met by high-risk systems and by providers of general-purpose AI models (Article 51).
The Framework Convention, for its part, provides (Article 4) that ‘each Party shall adopt or maintain measures to ensure that the activities within the lifecycle of artificial intelligence systems are consistent with obligations to protect human rights, as enshrined in applicable international law and in its domestic law’.
Fifth, and again in general terms, though not in every case, and in line with the previous point, the RAI imposes obligations of means and of result, whereas the CETS contains, essentially, obligations of result, leaving it to the States to specify the appropriate measures to achieve them. For example, the Regulation provides that high-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which they are in use (Article 14.1); likewise, it establishes that providers of high-risk AI systems which consider or have reason to consider that a high-risk AI system they have placed on the market is not in conformity with the Regulation shall immediately take the necessary corrective actions to bring it into conformity, to withdraw it, to disable it or to recall it, as appropriate (Article 20.1).
For its part, the Framework Convention provides (Article 1.1 and 1.2) that ‘the provisions of this Convention aim to ensure that activities within the lifecycle of artificial intelligence systems are fully consistent with human rights, democracy and the rule of law. Each Party shall adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention. These measures shall be graduated and differentiated as may be necessary in view of the severity and probability of the occurrence of adverse impacts on human rights, democracy and the rule of law throughout the lifecycle of artificial intelligence systems’. To mention another example, Article 5 provides that each Party shall adopt or maintain measures to ensure that artificial intelligence systems are not used to undermine the integrity, independence and effectiveness of democratic institutions and processes, including the principle of the separation of powers, respect for judicial independence and access to justice (Article 5.1), and that each Party shall adopt or maintain measures that seek to protect its democratic processes in the context of activities within the lifecycle of artificial intelligence systems, including individuals’ fair access to and participation in public debate, as well as their ability to freely form opinions (Article 5.2).
Finally, the RAI contains a system of penalties: ‘Member States shall lay down the rules on penalties and other enforcement measures, which may also include warnings and non-monetary measures, applicable to infringements of this Regulation by operators, and shall take all measures necessary to ensure that they are properly and effectively implemented… The penalties provided for shall be effective, proportionate and dissuasive’ (Article 99.1). The article then specifies the amounts of the administrative fines, which are substantial: non-compliance with the prohibition of the AI practices referred to in Article 5 is subject to an administrative fine of up to €35,000,000 or, if the offender is an undertaking, of up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
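Purely as an illustrative sketch of how that ‘whichever is higher’ rule operates, the ceiling can be expressed as a simple maximum of the fixed amount and the turnover-based amount; the turnover figure used below is hypothetical and is not drawn from the Regulation:

```python
# Illustrative sketch only: ceiling of the administrative fine for breaching the
# Article 5 prohibitions, per Article 99 of the RAI. The turnover is hypothetical.

FIXED_CAP_EUR = 35_000_000   # fixed ceiling of EUR 35,000,000
TURNOVER_RATE = 0.07         # 7% of total worldwide annual turnover

def fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Return the applicable ceiling: the higher of the two amounts."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# A hypothetical undertaking with EUR 1 billion in turnover: 7% = EUR 70,000,000,
# which exceeds EUR 35,000,000, so the turnover-based ceiling applies.
print(fine_ceiling(1_000_000_000))  # 70000000.0
```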
The Framework Convention, by contrast, merely states that ‘each Party shall establish or designate one or more effective mechanisms to oversee compliance with the obligations in the Convention’ (Article 26.1). However, since it is a treaty, and depending on each national legal system’s rules on the application of international law, domestic courts may find violations of some of its articles. For many signatories this means that courts or other authorities will have to determine whether the provisions of the CETS are precise enough to be regarded as self-executing. In any event, the CETS is likely to operate as an interpretative instrument for the European Court of Human Rights (ECtHR), which is not mentioned as such in the text of the CETS; the Convention instead provides for a ‘Conference of the Parties’ as a dispute-settlement mechanism (Article 23).
In conclusion, it is well known that Europe lags far behind the United States and China in AI research, development and innovation; with the instruments analysed above, it aims at least to take the lead in the legal regulation of AI. This is an ambitious attempt at harmonisation and at building a common minimum framework for Europe whose reach extends to the rest of the world. We will be watching the results.
