Dodgy Algorithms and Self-Representation: Is AI Good for Law?

Alexandra Thacker, Publications Editor and Contributor of the UCL Student Pro Bono Committee

I was first alerted to the potential problems with algorithms in 2020, when A-level results appeared to be unfairly decided by an algorithm, which resulted in almost 40% of students receiving lower grades than they had anticipated. After protest from students, the Government quickly about-turned on the matter, with Boris Johnson condemning the ‘mutant algorithm’ and instead awarding students teacher-predicted grades.

At the time I had never given much thought to the fact that the Government uses algorithms such as these to make important decisions about people’s lives. In fact, there are more than 40 automated decision-making systems (ADMs) used in public decision-making, including police use of facial recognition, universal credit decisions and the detection of ‘sham marriages’. These systems are aimed at increasing efficiency and fairness; however, like the A-level algorithm, they certainly have their problems. And yet AI also has the potential to be used for good, perhaps even increasing access to justice. It appears that if the aim is to increase fairness, we as humans need to be fairer too.

The Sham Marriage Algorithm

In February, the Public Law Project (PLP) launched a legal challenge over an algorithm used by the Home Office to detect potential ‘sham marriages’, concerned that the algorithm could be discriminatory.

The detection system, which the Home Office began using in 2019, is applied to marriage applications where one party is not a British or Irish citizen and lacks settled status or a valid visa. The algorithm assesses couples against eight ‘risk factors’, including age difference and shared travel history, though the full list of factors is undisclosed. If this assessment is ‘failed’, the couple are referred for further investigation and are burdened with the task of proving their relationship is genuine, not a marriage contracted solely for the purpose of avoiding immigration law.
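
To make the mechanics concrete, the minimal sketch below shows the general shape of a rule-based triage system of this kind: each application is scored against a list of risk factors and referred for further investigation if the score crosses a threshold. The factor names, weights and threshold here are invented purely for illustration, since the Home Office has not disclosed its actual criteria.

```python
# Purely illustrative sketch of a rule-based triage system of the kind PLP
# describes. The factors, weights and threshold are hypothetical; the real
# criteria used by the Home Office have not been made public.

from dataclasses import dataclass


@dataclass
class Application:
    age_difference_years: int      # age gap between the two parties
    shared_travel_history: bool    # whether the couple have travelled together


REFERRAL_THRESHOLD = 2  # hypothetical cut-off for referral


def risk_score(app: Application) -> int:
    """Count how many illustrative 'risk factors' an application triggers."""
    score = 0
    if app.age_difference_years >= 10:   # large age gap
        score += 1
    if not app.shared_travel_history:    # no shared travel history
        score += 1
    return score


def refer_for_investigation(app: Application) -> bool:
    """Return True if the application 'fails' the automated assessment."""
    return risk_score(app) >= REFERRAL_THRESHOLD
```

The point of the sketch is that the outcome is entirely determined by which factors are chosen and how they are weighted; if those choices encode bias, the system will reproduce it at scale.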

The main concern raised by PLP is that the algorithm appears to discriminate indirectly on the basis of nationality, with parties from Greece, Bulgaria, Romania and Albania being disproportionately targeted for investigation, despite there being no evidence that these nationalities are more likely to be involved in sham marriages. Where there is a discriminatory impact of this kind, it must be justified by the Home Office, which has not happened.

In addition to these potentially discriminatory outcomes, PLP are also concerned with the operation of the algorithm more generally. One key issue is transparency. The risk factors used by the algorithm have not been disclosed by the Home Office, breaching transparency rules under GDPR and making it very difficult for people first to understand why their marriage application is being investigated, and then to challenge that decision if they wish to.

Indeed, according to the Child Poverty Action Group, when officials at the Department for Work and Pensions were asked to explain how a claimant’s universal credit had been calculated in a system that involved AI, they were often unable to do so. In the case of detecting sham marriages, officials’ lack of understanding is perhaps evidenced by the fact that, according to PLP, there is not always a human review of cases that trigger further investigation. It appears that too much responsibility is placed on the algorithm to make important decisions, without proper manual oversight.

PLP’s Legal Director Ariane Adam noted that although new technology ‘can achieve greater efficiency, accuracy, and fairness in Government decision making’, if algorithms possess biases in the way humans do, by utilising biased human-made information, ‘then all they are doing is making prejudicial and unfair decisions faster than humans can.’ In addition, not only is the algorithm potentially making unfair decisions, there is also the added concern of a lack of accountability. The AI cannot be held to account for its poor decision-making in the way a person can, again making challenges to wrongful decisions harder for applicants.

AI for Self-Representation?

We should be careful not to discredit the use of AI in law completely, however. It is undeniable that over the years digitisation has massively increased efficiency for workers and students, and though it is not perfect, AI could potentially help to address a growing problem in the legal landscape today: the issue of self-representation.

In England and Wales, individuals are entitled to represent themselves in court if they so choose. Some may do so to avoid legal costs, while increasingly many others are forced to self-represent where legal aid is unavailable and private representation unaffordable. This obviously leads to challenges: a layperson will likely find the legal process complex, lengthy and often intimidating. As well as being emotionally and intellectually tough for the self-representing individual, the process creates an imbalance of power and knowledge. If a self-represented party is up against a party with a lawyer, it is very difficult for the self-represented party to create and present arguments with the same level of skill and conviction as the lawyer.

Judges are also put in a difficult position: how can they judge a case fairly when one party has clearly not presented their case as well as a lawyer would have done for them? Where there is such a stark knowledge imbalance, and a correspondingly high likelihood of unfairness, perhaps AI would be a useful tool to help bridge the disparity.

This possibility is being considered in Canada, where specially trained generative AI technology could help litigants to find and gather court cases, draft documents and understand next steps, thus improving the quality of their case and increasing their likelihood of success. This assistance would also save litigants time, perhaps making them more likely to engage in the self-representation process rather than giving up on their case when they cannot afford a lawyer.

There are still reasons to be cautious, however. Over-reliance on AI for building cases already has a troubling precedent: in May, two American lawyers submitted seemingly legitimate cases to court that were in fact entirely fictional, the citations invented by ChatGPT. These issues would likely only be exacerbated in the hands of self-representing litigants, who have no prior legal knowledge with which to assess the veracity of AI output.

What next?

While the possibility for AI to increase efficiency and fairness is very real, we must ensure we use it in a way that is transparent and rejects the authority of past, prejudicial decision-making. If we do not, we are simply left with human error and bias on an exaggerated scale, with the added worry of a lack of reasoning and accountability.

Disclaimer: The views and opinions expressed are those of the authors. They are for informational purposes only and do not necessarily reflect the official policy or position of UCL SPBC. 
