ARTIFICIAL INTELLIGENCE, JUSTICE AND THE HUMAN FACTOR


by Danilo Cilia and Alessandro Marzagalli

Will the ongoing technological revolution lead to the disappearance of the legal professions?

This is the question posed by the large-scale use of Artificial Intelligence, endowed with human capabilities such as reasoning, learning, planning and creativity.

It is true that the massive use of these systems to date concerns only ‘weak’ AI (i.e., AI programmed to solve problems ‘as if’ it had a human brain, and which therefore still requires a human being to perform its tasks), and that ‘strong’ AI, capable of fully replicating the human being, is still some way off (though not far, it is said).

But do not be misled by the adjective ‘weak’. Indeed, the challenges posed by the use of these systems are already enormous, even in the legal sphere.

It is undeniable that AI is already capable of performing many of the tasks previously reserved for the human professional, but with enormously greater efficiency: from the management of deadlines and communications to the storage and handling of data, from case-law research to the drafting of court documents.

Let’s be honest: for the same result, how many law firms will still be willing to invest in human collaborators when they can use machines and thus save enormous amounts of time and money?

Probably none, in the long run. And, indeed, the competitiveness of a law firm will be measured, in the coming months, by its ability to make intelligent use of these new… intelligences.

Yet, the indiscriminate use of AI could have unexpected implications, and lead to a different ending from the one that many, today, hasten to write.

First of all, the best lawyers will still be not only the best prepared, but also the wisest, the shrewdest, the best at ‘defending’. The ability to empathise with the client’s problem, while maintaining the detachment needed to suggest the best way forward, will remain a more important quality than the hard skills shared with machines, such as an encyclopaedic knowledge of rules and judgments.

Moreover, the lawyer today has a duty of confidentiality (Art. 28 of the Code of Legal Ethics) which, we wager, he will also maintain in the future: privacy will continue to be a primary value, one that clients will continue to demand and be willing to pay for. The lawyer who can continue to guarantee the client this value – even through a limited and skilful use of machines – might paradoxically have greater appeal than (merely) high-tech colleagues. From this point of view, indiscriminately feeding sensitive data to AI – especially LLMs (Large Language Models) – may not be a winning choice for the lawyer, not only from a deontological point of view, but also from an economic one.

Especially since the principle is already taking hold in the professions that the lawyer cannot profit from the time taken by the machine to produce a result he will then sell back to the client: we can charge for our time, not for the machine’s.

Fairness (Art. 6 of the Code of Legal Ethics) will therefore require transparent disclosure of ‘artificially’ generated results: the guidelines for the use of AI published last June by the Fédération des Barreaux d’Europe (FBE – Federation of European Bars), recently followed by those of the State Bar of California (‘Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law’, published on 16 November 2023), point in this direction.

For the practising and aspiring lawyer, moreover, mastery of these technologies will undoubtedly be a distinguishing asset in the market, but it also conceals a serious pitfall: that of losing the dominus – the senior lawyer – as teacher, and being sent ‘to school’ with the machines. A curious and paradoxical reversal of roles. The teacher has always been the one who, by educating the learner, succeeds in ‘drawing out’ of him (educare, from e-ducere) the truth, through the experience of life (non scholae sed vitae discimus). Soon, in the law firm, the opposite might happen, with the learner ‘pulling out’ knowledge from the teacher: the trainee will pull answers out of the machine and, in so doing – paradoxically – educate it. The learner instructs the teacher. And all outside of real life.

But, you can bet, AI will also lead to inevitable changes within the judicial offices.

For the prosecution, the main theme will be that of predictive justice (masterfully recounted by Spielberg in Minority Report). The ability to predict the commission of a crime will make it possible to prevent it: with what risks for the fundamental principles that inspire the criminal trial of a democratic society (the presumption of innocence, above all), it is easy to imagine. In this field, moreover, the future is already here: in the United States, the first algorithms for assessing the social dangerousness and risk of recidivism of the accused are already being tested.

From the judge’s perspective, finally, the question many are already asking is: will AI ever be used to formulate decisions and draft judgments? Here too, the elimination of the human factor will predictably generate contradictory effects.

On the one hand, the machine-judge would finally be immune to external conditioning, realising the Enlightenment ideal of the judge as mere ‘mouth of the law’. Decisions would be impervious to the factors that have always conditioned human judgement: consider the considerably higher probability of obtaining favourable rulings early in the morning, after the lunch break, or on a judge’s birthday [1]. All this, with AI, would be gone.

On the other hand, however, the automation of decisions would inevitably lead to their dehumanisation and progressive detachment from reality, with less and less room for appeals (what will we have to correct, if the machines are no longer wrong?) and for jurisprudential evolutions resulting from social changes.

A judgment, then, as a foregone conclusion, the result of a mere mathematical calculation? It seems to us that the nature of today’s algorithmic tools is not, or not yet, properly compatible with the typical intellectual activity of the judge – the evaluation of conduct, the interpretation of norms, the drafting of judgments: the accuracy of these machines, based on a machine-learning model, is wholly dependent on the data on which the programme is trained. The consequence is that, where the AI does not know, it will give the statistically most probable answer, even inventing it: just as happened to that lawyer in New York, sanctioned by the judge for having filed in court a document produced with an AI system and containing citations of no fewer than six non-existent judgments.

It is true: with machines there would be no more room for the upside-down justice of the judge-monkey who, in Collodi’s ‘Pinocchio’, was so moved by the unfortunate puppet, and by the fraud he had suffered over the gold coins in the Campo de’ Miracoli, that he had him… thrown in jail. But it is also true that the AI judge would no longer have any empathy, either for the defendant or for the victim of the crime (and we dare not imagine how much it would have for the lawyer and his pleadings!).

And what would remain, then, of that fundamental function of justice, which history has taught us consists (also) in the collective sublimation of compassion?

 

[1] Data and analysis from S. Danziger, J. Levav & L. Avnaim-Pesso, Extraneous Factors in Judicial Decisions, in Proceedings of the National Academy of Sciences (2011).
