
Conference: “Artificial Intelligence in Justice? Yes, but with legal and constitutional guarantees that we do not have today”, by Lorenzo Cotino Hueso

  • October 7th, 2024
 

Conference: “Artificial Intelligence in Justice? Yes, but with legal and constitutional guarantees that we do not have today”

Lorenzo Cotino Hueso
Full Professor of Constitutional Law at the Universitat de València. Valgrai. Head of the Observatory on the Transformation of the Public Sector

Is the future of artificial intelligence in justice a necessity or a risk?

Artificial intelligence (AI) is transforming many sectors, and justice is no exception. However, its use in this field generates enthusiasm as well as concern; proof of that is the very interesting film “Artificial Justice”. It is important to tackle the key question of how AI can be used in the jurisdictional context, and especially for strictly jurisdictional use, rather than dwelling on collateral matters of management, which are nonetheless of great interest for speeding up and improving the functioning of our justice system. Regarding jurisdictional use, respect for constitutional guarantees and the necessary supervision are essential to ensure that implementation is effective and respectful of rights and guarantees. This is a reflection I recently had the opportunity to develop further in the journal Actualidad Jurídica Iberoamericana, no. 21, 2024.

 

Why do we need AI in the jurisdictional field?

The use of automated systems and artificial intelligence in the judicial field holds great opportunities. Despite the reticence of many jurists, reflected for instance in the report recently led by Martínez Garay with Amnesty International, the truth is that AI systems are present in more than a hundred initiatives in Europe, most of them in administrative management, and countries like China have implemented them in justice. In particular, China plans to make the use of AI in its courts mandatory by 2025. Meanwhile, in regions such as Ibero-America, with projects like JUSLAB in Argentina and similar ones, more ambitious possibilities are on the horizon.
However, despite this potential, the debate on the use of AI in jurisdictional decision-making remains open. In Europe, or at least in Spain, many fear that these technologies will violate guarantees and may even come to replace human judges. In Spain, the General Council of the Judiciary maintains that jurisdictional authority is ontologically linked to human nature, a line also followed by the very interesting August 2024 judgment of the Constitutional Court of Colombia. Others, myself included, point to the need to optimise the functioning of the judicial system and to ensure greater efficiency through disruptive technologies. Always, however, with guarantees.

 

 

Artificial intelligence

Source: Image generated by Walle

 

Is the current regulatory framework enough?

The European Union's Artificial Intelligence Regulation (RAI) classifies certain AI systems as “high risk”, with particular relevance for the field of so-called “law enforcement” (police and criminal uses, border controls and the like), as well as, specifically, for justice. The high-risk systems the RAI refers to here are directly jurisdictional, not ancillary tools of the Administration of Justice, and they include tools to evaluate evidence, predict risks or even draft judgments and judicial decisions applying the law. Being high-risk entails compliance with a whole series of guarantees in order to pass a “conformity assessment”: risk assessment, data quality and governance, accuracy, precision, technical documentation, transparency, record-keeping, human oversight and cybersecurity.
However, Recital 63 of the RAI is clear in warning that the RAI does not constitute a legal basis authorising the use of such systems, unless the law of the Member States or of the EU so provides. In other words, the RAI contemplates the possibility of states using these systems, provided that the states regulate such use, in which case they will necessarily have to comply with the guarantees and requirements of the RAI.

What are the deficiencies of Royal Decree-Law 6/2023?

Royal Decree-Law 6/2023 regulates the use of AI and automated systems in Spain. It is precisely the rule that must provide legal coverage and lay down specific guarantees, as the case law of the Constitutional Court demands, in line with the requirements of other European courts regarding the use of automated systems.
However, despite its significant advances, this rule is clearly insufficient in this matter. The Decree-Law mentions data orientation and the use of AI in this regard, as well as automated actions, among them the so-called “proactive” and “assisted” actions (intelligent drafts of any type of act, to be validated by the judge, the public prosecutor or the court lawyer), together with the requirements they share. Yet these are generic mentions that come nowhere near the standards of a quality regulation affecting fundamental rights. So much so that the General Council of the Judiciary considers it akin to a blank criminal law, drafted in “cryptic” terms.

What are the necessary guarantees for an adequate use of AI?

One of the most delicate issues in the use of AI in the jurisdictional field is the need for sufficient legal guarantees. The general provisions of the RAI are not enough. A specific regulation is required that sets out the particular guarantees to be applied in each case, with a normative density and intensity adapted to the varying procedural demands of the different types of jurisdictional proceedings and to the specific use at hand.
In each context, the following must be modulated and regulated, with a statutory minimum: the guarantees of the RAI, databases, audits, the subjects, units and bodies responsible and in charge, the processing method, the use of output data, periods of use and retention, destruction, pseudonymisation and other technical and operational guarantees, the possibilities of procurement, in-house development and subcontracting, the need for acts of resolution, validation or implementation, information, basic transparency and a long list of further requirements. Likewise, following the model of the Valencian regulation, a register of judicial algorithms should be adopted. The standardisation adopted by the Technical Committee for Electronic Judicial Administration on 21 June 2024 for the use of artificial intelligence in justice, however positive, does not cover the matters that need to be regulated by a rule with the force of law.

 

Artificial intelligence

Source: Image generated by Walle

 

Is it possible to improve the current legislation?

Royal Decree-Law 6/2023, even though it introduces some advances, is clearly insufficient. Its mentions of the use of algorithms and automated systems are too general, and the legal authorisation is ambiguous. For example, Article 35 of the Decree-Law mentions “data orientation” and the application of AI techniques, but it does not define precisely the conditions under which AI can be used to support the jurisdictional function.
The draft law intended to expand these provisions also lacks the necessary transparency. Even though it contains some proposals, such as the use of AI for the anonymisation of judicial documents or the extraction of indicators of social vulnerability, the regulation remains thin. Greater regulatory density is required, specifying thoroughly how, when and for what purposes these systems can be used, as well as the guarantees required in each case.

What awaits us in the future of AI in justice?

Despite the challenges, it is undeniable that AI has the potential to transform the judicial system. However, its implementation must be carried out carefully and with respect for the fundamental principles of the rule of law. The use of artificial intelligence in justice is not an option but an opportunity, and a necessity to ensure a faster, more efficient and more accessible justice. But with guarantees.
In Spain, the Spanish Agency for the Supervision of Artificial Intelligence (AESIA) is far from having sufficient independence to play this role. The Spanish Data Protection Agency (AEPD) does have that independence, but it lacks the capacity to oversee the Judiciary; only the General Council of the Judiciary (CGPJ) has the independence and constitutional standing to supervise the AI systems used in justice, at least those for strictly jurisdictional use. This is what I argued in the work cited above, and it is what the CGPJ itself recently affirmed in June 2024.
We must therefore advocate better regulation and authorisation of the jurisdictional use of AI in justice, with sufficient guarantees. And all of this together with the mandatory use of artificial intelligence by public authorities to improve the effectiveness of rights, something I have been defending for years.

 

What are the implications of using AI in judicial decisions?

One of the most controversial aspects is the use of AI to produce drafts of judicial resolutions. Although these drafts must be reviewed and validated by a human judge, the influence of AI on the final decision can be significant. This raises doubts about the jurisdictional exclusivity of judges and the right of citizens to have their case decided by a human being.
The RAI classifies as “high risk” the systems that influence judicial decision-making, which means that all the guarantees of the European regulation must apply. Moreover, Article 22 of the General Data Protection Regulation establishes the right of citizens not to be subject to automated decisions without human intervention. Although in the case of drafts there is some human review, more transparency is needed on how these guarantees are to be implemented.

 


 

 

Lorenzo Cotino Hueso

 

Full Professor of Constitutional Law at the Universitat de València (four six-year research terms, ANECA), Magistrate of the High Court of Justice (TSJ) of the Valencian Community 2000-2019, chair of the Transparency Council of the Valencian Community since 2015. PhD and Degree in Law (U. Valencia), Master in Fundamental Rights (ESADE, Barcelona), Graduate and holder of the Diploma of Advanced Studies in Political Science (UNED). Director of Privacy and Rights at OdiseIA. www.cotino.es