This is an independent guest post written by Amanda Chaboryk. Amanda currently works as the Head of Legal Data and Systems at a consulting firm and is a board member of a charity. She focuses on the intersection of law and data science, with a particular interest in emerging technologies, such as generative AI.
Introduction
The legal field has seen a continuously increasing and necessary demand for innovation to address long-standing issues surrounding access to justice. AI undeniably plays a large part in this and holds the potential to profoundly transform society – influencing everything from how we access information to the way companies and government agencies (to name a few) make critical decisions about us. By leveraging AI technologies, there is great promise to further streamline legal processes, enhancing the efficiency of legal services and providing more equitable access to legal resources. This paper will cover the benefits of leveraging AI (namely generative AI) in legal practice to facilitate access to justice, and the associated regulatory regimes for liability, risks, and considerations. As AI continues to evolve, its integration into the legal field will drive further advancements and reshape the landscape of justice.
Governments around the world are increasingly leveraging AI to enhance public services and streamline operations. For instance, in the UK, the Public Sector Fraud Authority (PSFA) uses AI-powered tools, such as the Single Network Analytics Platform (SNAP), to detect and prevent fraud in public spending. The justice system is no exception. Police forces have used AI-enabled facial recognition technology to enhance their surveillance capabilities and support decision-making. Just a year ago, the HM Judiciary released detailed guidance on the application of AI within the courts and tribunals, designed to help judicial office holders responsibly incorporate AI into their duties whilst safeguarding the integrity of the justice system. The potential advantages of using AI in the judiciary are tremendous – enhanced efficiency, consistency, and accessibility; optimised legal research; predictive analysis; and cost reduction. Together, these show promise to make legal services more accessible and affordable for all. It is crucial to acknowledge, however, that while AI presents significant opportunities, it also introduces material risks that must be carefully managed, such as concerns around algorithmic transparency and bias. This necessitates an important discussion on the current landscape of access to justice.
At a high level, the major factors that impact access to justice and prevent individuals from effectively navigating the legal system are financial constraints, legal complexity, language barriers, and lack of legal knowledge. Monica Bay, Fellow of Stanford Law School CodeX, rightly said at Legalweek 2017 that “as computational technology and artificial intelligence matures, more people will be able to have better access to justice.” Whilst technology has the potential to facilitate access to justice, it is by no means a standalone solution. Its success hinges on an evolving innovation strategy consisting of technological innovation, improved data management, stakeholder cooperation, and system coordination. As technology continues to evolve and shape access to justice, it is essential to recognise that the definition of access itself varies significantly depending on individual circumstances and specific legal needs.
Access to justice notably has different meanings to different individuals, based on their unique circumstances. This can include, but is not limited to, an understanding of the process to address a legal problem (largely connected to legal knowledge), affordability of representation, and comprehension of legal rights or court processes. For many, legal representation has become almost a luxury rather than a necessity due to the rising costs of legal fees. According to Forbes, as of 2024, the average UK salary is £36,000 before tax, based on the mean gross weekly wage of £896 (including bonuses). Consider an individual who has been unlawfully dismissed and wishes to bring a claim to the employment tribunal. The average hourly cost of employment legal advice in the UK typically ranges from £200 to over £800. The precise rate depends on several factors, including the solicitor’s experience (PQE), the complexity of the case, the firm, and the geographical location of the practice. In cities such as London, rates are naturally on the higher end. Preliminary steps include an initial consultation, determining legal issues, reviewing the employment contract, and reviewing core evidential documentation. These steps can take 10–20 hours, costing £2,500–£5,000. Given the average UK salary of £36,000, this represents approximately 7–14% of annual income, highlighting the significant financial burden of legal representation relative to the average income.
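The burden described above can be sketched as a quick back-of-the-envelope calculation. The figures below are the illustrative ranges quoted in this article, not authoritative data, and the mid-range hourly rate is an assumption chosen for the example:

```python
# Back-of-the-envelope estimate of legal fees as a share of annual income.
# All figures are the illustrative ranges quoted in the article, not real data.

AVERAGE_SALARY = 36_000        # average UK salary before tax (GBP), per the Forbes figure cited
HOURS_LOW, HOURS_HIGH = 10, 20 # preliminary steps: consultation, contract review, evidence review
MID_RANGE_RATE = 250           # assumed mid-range solicitor rate (GBP/hour); quoted range is £200–£800+

def fee_burden(hours: float, rate: float, salary: float = AVERAGE_SALARY) -> float:
    """Return estimated legal fees as a percentage of gross annual salary."""
    return 100 * (hours * rate) / salary

low = fee_burden(HOURS_LOW, MID_RANGE_RATE)    # ~£2,500 of preliminary work
high = fee_burden(HOURS_HIGH, MID_RANGE_RATE)  # ~£5,000 of preliminary work
print(f"{low:.0f}%–{high:.0f}% of annual income")  # roughly 7%–14%
```

Even this conservative sketch, using a mid-range hourly rate and only the preliminary steps, shows a double-digit share of gross annual income is plausible before a claim even reaches the tribunal.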
Vulnerable groups, such as low-income individuals or non-native speakers, may face additional barriers, making access to justice even more complex, contingent on their specific needs and challenges. This leads to the key question of how access to justice is defined, which is closely bound up with the rule of law. For example, the United Nations details that “access to justice is the basic principle of the rule of law, [and that] [i]n the absence of access to justice, people are unable to have their voice heard, exercise their rights, challenge discrimination or hold decision-makers accountable.” The relationship between access to justice and the rule of law is fundamental, as access to justice ensures that individuals can defend their rights and hold others accountable under the law. The rule of law, which upholds equality before the law, protection of rights, and limits on state power, cannot function properly if people cannot access legal remedies. Furthermore, access to justice, as defined in the “Legal needs of individuals in England and Wales” report, is the ability of individuals to engage with the legal system effectively, ensuring they understand their rights, responsibilities, and available remedies. Access to justice also involves understanding the legal process, having the confidence to engage with it, and accessing the resources required to navigate the system. The legal needs report highlights that key barriers include financial constraints, low legal confidence, limited accessibility, and a lack of legal self-efficacy. Recognising the critical need to enhance access to justice makes it essential to explore innovative solutions that can effectively address these challenges. However, innovation’s role in facilitating access to justice is not solely about new technology – it’s also about rethinking (and redesigning) processes, services, and technological delivery.
As technology rapidly evolves and adapts, organisations need to as well, with a strong focus on bespoke solutions to individuals’ unique legal needs. For technology to have maximum impact in facilitating access to justice, it needs to be designed with the user in mind. This is grounded in legal technology simplifying the process for individuals – supporting them in understanding their legal rights and how to access them.
Role of technology
Prior to exploring how emerging technologies can facilitate access to justice, it’s crucial to identify barriers that hinder the adoption of technology within the legal sector. The “Technology, access to justice and the rule of law” report outlines several obstacles to technology adoption, including confusion and fragmentation around current technologies, unequal resources among providers, challenges with digital and legal literacy, insufficient funding, and regulatory concerns. Digital and data literacy is essential to AI literacy, as understanding technology’s underlying infrastructure is vital for effective engagement. The EU AI Act notably addresses AI literacy in Article 4, emphasising that providers and deployers of AI systems must ensure an adequate level of AI literacy among their staff and those involved in operating or using AI systems on their behalf. The Act goes on to stress that AI literacy is crucial for comprehending both the opportunities and risks of AI, as well as for compliance with the Act’s provisions on ethical and safe AI deployment. Given that data (and AI) literacy is vital to increasing adoption and is now contained within legislation, user training and understanding are more important than ever.
Liability in the Context of AI-Provided Legal Advice
Significant questions arise about how liability works when AI systems augment or generate legal advice. This naturally raises the question: what should be done when an intelligent autonomous machine makes an error that leads to damage or harm? Holding someone or something responsible is often necessary for obtaining compensation for damages caused, and this responsibility is crucial for attributing blame for wrongful conduct, whether it involves humans or AI systems. The “Liability for Artificial Intelligence and Other Emerging Digital Technologies” report (2019), compiled by the Expert Group on Liability and New Technologies, highlighted that whilst existing liability regimes in EU Member States provide basic protection for victims, the unique attributes of AI technologies present challenges in ensuring effective victim compensation. The report underscored the need for updated legal frameworks to address these complexities and enhance legal clarity and fairness for those affected by AI-related incidents. Notably, this report was compiled prior to the mainstream usage of generative AI in 2022, indicating a natural need for updated analysis to address the new and complex challenges posed by generative AI technologies.
To understand how liability might operate in this context, it’s essential to consider recent legal initiatives and EU laws regarding AI usage. Recent legal initiatives, such as the EU AI Act (as above), GDPR, and the Product Liability Directive, offer a framework addressing these issues. The EU AI Act categorises AI systems by risk and imposes stringent requirements on high-risk systems, including those providing legal advice. The Act mandates that AI providers ensure their systems are safe, transparent, and accountable. GDPR plays a crucial role in ensuring data privacy and protection, requiring AI systems that process personal data to handle it lawfully, transparently, and securely. The Product Liability Directive is a revised piece of EU legislation that aims to modernise the framework for product liability, making it easier for consumers to seek compensation for harm caused by defective products. Accordingly, liability for AI-provided legal advice could fall on various parties. AI providers could be held liable if their systems fail to comply with relevant regulations or operate unsafely. Legal professionals or organisations using AI systems must understand the system’s limitations and verify the advice given, as they could naturally be held liable if they rely solely on AI without proper oversight. Verification, cross-referencing, and source checking are already processes deeply embedded in the legal profession. These practices ensure that legal advice, and ultimately its outputs, are accurate, reliable, and based on the most current and relevant information. Lawyers routinely verify facts, cross-reference legal precedents, and check sources to uphold the integrity of their work. This meticulous approach is of course crucial in maintaining the trust and confidence of clients and the legal system as a whole.
An example of AI liability arises in competition law, where a significant challenge occurs when AI systems used by sellers independently develop anti-competitive behaviours without being explicitly programmed to do so. Regulators are still working to create frameworks to tackle this issue, as traditional competition laws may not fully address the complexities introduced by AI-driven market dynamics. Currently, there is no definitive solution, but ongoing efforts aim to ensure fair competition and prevent AI-facilitated market abuses. An insightful article by law firm Squire Patton Boggs covers these issues in granular detail.
To further enhance the reliability and accountability of AI systems in legal contexts, explainable AI (XAI) emerges as a critical component. XAI involves methods that allow human experts to understand how and why an algorithm arrived at its decision, in contrast with “black box” machine learning models that lack transparency. This is crucial for ensuring accountability and reliability across industries, including legal contexts. As a practical illustration, some legal AI platforms have knowledge cut-off dates, so a solicitor drafting advice that involves up-to-date legislation would naturally leverage their subject matter expertise to ensure this is accommodated and reflected in the advice. This speaks to the importance of supplier due diligence and consideration of the prevalent limited warranties offered by providers in their terms of use.
With the advent and increased accessibility of generative AI, countless individuals are now empowered to receive preliminary legal guidance and advice. This advancement arguably facilitates self-representation (particularly in non-complex situations) and a preliminary start on a legal journey, because these platforms can provide information on legal rights, draft basic legal documents, and even offer insights into potential legal actions. The Law Society details this eloquently in their report on generative AI, citing that the increased accessibility, affordability, and sophistication of these tools have enabled widespread public adoption. Whilst making legal information more accessible and affordable, GenAI can help bridge the gap for those who might otherwise be unable to seek legal assistance, but there are naturally risks and limitations. In the context of accuracy and reliability, AI systems may not always provide accurate or up-to-date legal advice, as cited above. Many platforms also have knowledge cut-off dates, which are important to look out for. Legal professionals are trained to interpret and apply the law, considering the nuances and complexities that AI might naturally miss. Conversely, legal issues often require personalised advice catered to an individual’s specific circumstances, which AI may not be able to provide adequately. While GenAI is a valuable tool, it cannot and will not replace the expertise and judgement of legal professionals, and should instead serve as an enabler. By combining the strengths of AI with the expertise of legal professionals, there is promise of a more accessible, efficient, and equitable legal system. This necessitates striking a balance between leveraging technological advancements and maintaining the integrity and reliability of legal services.
It is crucial to strike a delicate balance between the risks and rewards when utilising legal technology. While GenAI tools are trained on vast datasets and can provide accurate information, they are not infallible. This underscores the importance of maintaining a human in the loop and ensuring thorough verification. Overreliance on any legal technology can lead to individuals missing out on the nuanced advice and advocacy that only human legal professionals can currently provide. This highlights the underlying fact that, in the vast majority of cases, even with the advent of Legal AI Agents, the expertise of legal professionals remains indispensable. Legal professionals bring years of experience, empathy, ethics, and the ability to navigate the intricacies of the legal system. Therefore, while legal Subject Matter Experts (SMEs) powered by GenAI can be extremely powerful, it is essential to recognise and acknowledge the irreplaceable value that human legal professionals contribute.
Case Studies
Various legal services providers have been successful in overcoming barriers to access to justice by centring their technology solutions on the user’s needs. Illustrative examples include the utilisation of mobile apps, user-friendly case management systems, and chatbots to optimise the delivery of services. An illustrative example of a legal service utilising chatbots to facilitate access to justice is Access Social Care. The charity provides free legal advice to individuals with social care needs, with the objective of helping people understand and assert their legal rights in social care, improving their quality of life. One of their innovative tools is AccessAva, a 24/7 AI-driven chatbot that answers questions on health and social care issues across England and directs users to appropriate sources of legal help. It supports adults, families, and carers by providing personalised legal information, guidance, or pre-written letters to help resolve social care challenges. By using AI, AccessAva adapts and learns from user interactions, ensuring a person-centred approach while delivering high-quality, lawyer-reviewed assistance for social care needs. Applications like AccessAva show immense promise for AI to facilitate quicker and more accessible assistance in navigating complex social care issues, thereby facilitating access to justice.
Generative AI
Generative AI, specifically, can play a transformative role in improving access to justice by addressing key challenges that individuals face in navigating the legal system. It can expand the reach of tools like AccessAva by enabling the development of more personalised and intelligent legal assistance solutions. Generative AI refers to technology that can create new content – such as text, images, or even legal documents – based on patterns learned from large datasets. It uses advanced algorithms to produce outputs that resemble human-created work, enabling more dynamic and personalised interactions, like generating tailored legal advice in response to user queries. Through AI-driven natural language processing, these systems can dynamically generate tailored legal advice, simulate real-time interactions, and continuously learn from user inputs. This adaptability enhances access to justice by making legal guidance more widely available, reducing costs, and providing faster responses to complex legal inquiries, especially for underserved populations. It is crucial to note that large language models (LLMs) produce text based on statistical patterns from their training data rather than logical reasoning. This underscores the necessity for subject matter expert (SME) review and verification. Consequently, although the outputs may sound highly confident and coherent, they can still be inaccurate. This is of note for individuals without access to professional advice, who might mistakenly trust these outputs. However, models like Copilot provide references to their sources, which supports the verification process and helps ensure the accuracy of the information provided. Lengthy pieces of legislation can also be uploaded directly to these models, which can break down complex legal language into digestible terms.
Generative AI can also be immensely helpful in streamlining administrative tasks through automated content generation and document drafting. This includes the production of first drafts of legal documents, such as contracts, advice, and even court submissions. In the context of document review, generative AI is very effective at extraction beyond keyword searches. By streamlining these processes, generative AI can reduce costs, save time, and enhance accuracy, thereby allowing legal professionals to focus on higher-value tasks that require human decision-making and experience. This technology can democratise access to legal resources, making it possible for individuals who cannot afford traditional legal services to have a starting point. That starting point might be inputting pieces of legislation or asking Copilot for sourced definitions of a housing law term, such as “eviction notice.” From there, users can look up resources for local authorities, seek guidance on tenant rights, and get help drafting responses to landlords. Generative AI can also aid in finding templates for legal documents and providing step-by-step guidance on how to fill them out, ensuring that individuals can effectively advocate for themselves even without professional legal representation.
Simplifying Legal Complexities with Generative AI
As use cases continue to develop, Generative AI (GenAI) can simplify legal complexities, making them more digestible and accessible. Large language models (LLMs) are the foundation of generative AI, enabling it to create text based on patterns learned from vast datasets. These models process and generate human-like language, allowing GenAI to produce contextually relevant content such as legal advice, communications, or articles. Fine-tuning a generative AI model involves adjusting a pre-trained model on a specific dataset to improve its performance and relevance for particular tasks or domains. Through the creation of fine-tuned models, LLMs can offer personalised, easy-to-understand explanations of legal processes and documents, making them accessible to individuals with limited legal knowledge or language barriers. Fine-tuning generative AI for specific legal areas can ensure precise, contextually relevant advice, enhancing accessibility, efficiency, and accuracy in legal consultations and documentation.
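As a concrete sketch of what preparing a fine-tuning dataset can look like, the snippet below builds a small training file in the chat-style JSONL format used by several fine-tuning APIs (one JSON record of messages per line). The question/answer pairs, system prompt, and filename are hypothetical placeholders, not real legal training data:

```python
import json

# Hypothetical question/answer pairs for an employment-law assistant.
# Real fine-tuning would require a large, lawyer-reviewed dataset.
examples = [
    {"question": "What is the minimum statutory notice period after 3 years of service?",
     "answer": "In the UK, statutory minimum notice is one week per complete year of service, so three weeks after three years."},
    {"question": "Can my employer dismiss me without any process?",
     "answer": "Usually not: after two years' service, a dismissal generally needs a fair reason and a fair procedure."},
]

def to_record(example: dict) -> dict:
    """Wrap a Q&A pair in the chat-message structure expected by chat fine-tuning APIs."""
    return {
        "messages": [
            {"role": "system", "content": "You are an employment-law assistant. Answer in plain English."},
            {"role": "user", "content": example["question"]},
            {"role": "assistant", "content": example["answer"]},
        ]
    }

# Write one JSON record per line (JSONL), the usual fine-tuning upload format.
with open("employment_law_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(to_record(ex)) + "\n")
```

The system prompt is where domain framing (plain-English explanations, jurisdiction, tone) is encoded consistently across every training example; the bulk of the cost in practice lies in curating and lawyer-reviewing the Q&A pairs, not in the file format.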
One example of how a government agency in law enforcement could benefit from generative AI is in investigative report writing and documentation. Law enforcement agencies often handle extensive paperwork and detailed reports that are crucial for legal proceedings. By fine-tuning generative AI models, agencies can automate the creation of these reports, ensuring they are thorough, accurate, and contextually relevant. For instance, when investigating a crime, AI can help officers generate initial drafts of incident reports, witness statements, and evidence summaries based on data input from the field. This can significantly reduce the time officers spend on administrative tasks, allowing them to focus more on investigative work and community engagement. Moreover, AI can ensure that legal terminology and processes are clearly explained, making the documents easier to understand for all parties involved, including those with limited legal knowledge or language barriers. However, it’s important to note that fine-tuning generative AI models is an expensive process that requires a range of subject matter experts (SMEs) and developers to support it. Whilst this would be an expensive endeavour, the benefits would likely outweigh the associated costs.
Addressing Common Legal Problems with Fine-Tuned Models
The “2023 Legal Needs Survey Report” highlighted that the most common legal problem types are related to employment, finance, welfare, and benefits. A fine-tuned employment law model can provide personalised, easy-to-understand explanations of legal processes and documents, making them accessible to individuals with limited legal knowledge and language barriers. This model can cover areas such as employee rights, employer obligations, workplace discrimination, and termination procedures, offering clear guidance and actionable advice. By leveraging LLMs, the model can generate contextually relevant content, ensuring that users receive accurate and comprehensible information tailored to their specific employment law queries. This can lead to more efficient and focused consultations with legal professionals, saving time and potentially reducing legal costs. Moreover, a fine-tuned employment model can be highly beneficial for legal centres, enhancing accessibility, efficiency, and resource optimisation. It can provide clear, consistent legal information, helping clients with limited knowledge and language barriers. The model can also streamline the intake process, allowing legal professionals to focus on complex issues and triage cases more effectively. Additionally, the model can serve as an educational tool, empowering clients to understand their rights and obligations under employment law.
Building on the benefits of fine-tuned employment law models, another promising use case is the development of integrated legal decision support systems, which combine the strengths of generative AI with other AI tools, such as text retrieval and hyperlinking, to provide comprehensive legal support. Advantages of legal decision-support systems include handling the complexity of legal information by ensuring that all relevant statutory provisions are considered, reducing the risk of oversight. These systems can also be maintained by legal professionals, ensuring that up-to-date developments are incorporated. This would be particularly helpful for free legal advice services, such as community legal centres, which often operate with limited resources and are highly reliant on volunteers with the relevant skills.
To ensure that individuals have access to up-to-date and reliable GenAI, acknowledging the high costs of paywalled systems, there are several strategies that can be deployed. Paid tiers of platforms like ChatGPT (“ChatGPT Plus”) and Perplexity Pro, typically around $20 per month, offer an affordable entry point for individuals who want to start experimenting with legal content. These platforms provide a cost-effective way for users to access advanced AI capabilities without the need for significant financial investment. It’s also important to flag that GenAI platforms should be treated as foundational tools that support the efficient and effective practice of law. As infrastructure for processing data, the quality and relevance of the inputs are crucial. One of the key inputs that helps ensure accurate and high-quality results from GenAI systems is the use of structured and up-to-date legal precedents. By providing GenAI systems with well-organised and current precedents, users can enhance the AI’s ability to generate relevant and accurate legal content.
Leveraging technology and establishing strategic partnerships can enhance the accessibility and reliability of legal services. Open-source models are software programs made freely available to the public, allowing anyone to see, use, modify, and share the code. Open-source alternatives can be continuously updated by a community of developers, ensuring they remain current and reliable. This level of openness can foster an inclusive and equitable technology ecosystem, allowing smaller firms and individual practitioners to access high-quality tools without the prohibitive costs that can be associated with proprietary software. Governments and non-profit organisations can fund and provide access to generative AI tools to supplement legal delivery services. Collaborations between AI developers and universities can lead to the creation of free or low-cost AI tools tailored for legal research. An example of this is the University of Oxford’s Law Faculty, which received a £1.2 million grant to explore the potential and limitations of AI in legal services – an interdisciplinary project involving collaboration with private sector organisations to develop education and training packages for lawyers and programmers. Another illustrative example is Yale Law School’s DocProject, a program of the Media Freedom and Information Access (MFIA) clinic, which provides pro bono legal representation for documentary filmmakers. This project aims to provide legal support to documentary filmmakers who lack access to resources and to educate future media lawyers, demonstrating how such initiatives can facilitate access to justice.
Conclusion
Generative AI is revolutionising the legal landscape by breaking down barriers and democratising access to justice. It empowers individuals with essential knowledge and resources. While the implementation of generative AI in law faces challenges, such as ensuring accuracy and addressing ethical concerns like algorithmic bias, these hurdles are part of any significant advancement. Balancing the benefits of AI with the integrity of the legal profession is crucial. By developing integrated legal decision-support systems and fostering collaboration between free legal advice services and legal information institutes, we can create a more just and equitable legal system. The future of legal access is here, powered by AI. While AI is not perfect, neither are humans. Generative AI’s potential to automate and enhance legal processes can lead to significant improvements in efficiency, accuracy, and accessibility. By making legal resources more understandable and available to the public, we can bridge the gap between legal knowledge and those who need it most. Despite the initial investment and ongoing challenges, the long-term benefits of AI in law are profound, offering a future where legal services are more accessible, affordable, and equitable for all.