Parties’ use of Artificial Intelligence in the Family Courts: Risks, responsibilities and judicial guidance
Introduction: Artificial Intelligence in Family Court Proceedings
In the family courts, there has been a noticeable increase in the use of artificial intelligence (AI) tools by parties. Increasingly, unrepresented parents are turning to AI tools such as ChatGPT and other generative AI models to assist with drafting documents for submission to the court. This presents a relatively new challenge for the family courts, where AI has become a tool of choice for unrepresented parties navigating emotionally difficult and legally complex disputes. As the practice becomes more prevalent, more reported cases addressing it can be expected.
AI use by litigants in person: Recent appellate authority
The recent Court of Appeal judgment in D (a child) (recusal) [2025] highlights how AI tools are being used by parties and the courts’ evolving stance on the issue. In that case, a mother acting in person submitted an extensive skeleton argument, prepared with the assistance of AI, setting out grounds of appeal relating to the recusal of a judge. The document contained a mixture of appropriate case references, misapplied authorities, and citations that did not exist at all. The court recorded that the mother accepted she had used AI to assist in her preparation and that some of the erroneous citations resulted from this.
Judicial sympathy and limits on AI reliance
Lord Justice Baker, with whom Cobb LJ and Miles LJ agreed, expressed sympathy for unrepresented parents who turn to AI for assistance. As Baker LJ observed, “It is entirely understandable that litigants in person should resort to artificial intelligence for help.” This recognition reflects the practical realities faced by many parents involved in family proceedings where professional legal assistance is unavailable. However, Baker LJ was equally clear that such sympathy does not diminish the responsibility borne by parties to ensure that all material placed before the court is accurate and reliable. At its most serious, reliance on AI-generated content containing ‘hallucinated’ or inaccurate legal authorities risks misleading the court and generating additional emotional and financial costs for all parties, as further judicial time must be spent scrutinising such submissions (Baker LJ, at para. 83).
Litigants in person and the post-LASPO landscape
From a policy perspective, the case illustrates the broader systemic pressures on the family justice system in England and Wales. Access to legal aid remains restricted, and many parents are unable to afford private representation. For parents facing unfamiliar legal terrain in disputes concerning their children, it is unsurprising that they seek assistance from accessible tools promising to simplify legal research and drafting. The growing reliance on technology to bridge these gaps reflects the post-LASPO landscape, in which digital tools have increasingly been deployed to manage unmet legal need and facilitate access to justice, albeit often outside formal regulatory frameworks (Larkin A, ‘AI and Automation in Practice: Insights from Experience, Solutions to Client Pain Points and Adding “Value”’ (2025) 55 Family Law 1369).
Procedural responsibility and accuracy in family proceedings
However, judicial sympathy sits alongside a clear caveat: litigants in person cannot absolve themselves of the responsibility to present accurate material to the court simply because they are representing themselves. The procedural rules place responsibilities on all parties, legally trained or otherwise.
While AI tools can offer a measure of practical support for parties representing themselves, their use presents a real risk to procedural integrity where inaccurate or fabricated material is placed before the court.
Children’s welfare and the impact of AI-generated submissions
In family proceedings, these risks cannot be viewed in isolation from the welfare principle: delay and misdirected argument caused by flawed AI-generated documents can ultimately affect children’s welfare. Recent scholarship demonstrates that AI is no longer a peripheral concern in family law, but one that increasingly intersects with safeguarding and children’s wellbeing.
The challenge for the courts is therefore not whether AI should be prohibited outright, but how its use should be managed so that technological assistance does not undermine the child-focused administration of justice.
Judicial responsibility: AI use and vigilance
The risks associated with AI use are clearest where AI models produce plausible but fabricated authorities. These risks were acknowledged in the updated Judicial Guidance on Artificial Intelligence published in October 2025. The guidance warns judicial office holders of the phenomenon described as an “AI hallucination” and urges judges to remain vigilant when engaging with AI-generated material. The concerns raised in D (a child) reflect broader institutional efforts by the judiciary to address the implications of AI across the justice system: to understand common AI terminology, to identify AI-generated material, and to assess the associated risks.
The future of AI regulation in the Family Courts
Although the guidance is directed at judicial office holders, it acknowledges the growing presence of AI-generated material within court proceedings. This raises further questions as to whether procedural reform or supplementary guidance will be required to address the use of AI by parties directly, particularly litigants in person.
As AI use becomes increasingly embedded in legal practice, the family courts must continue to develop proportionate responses that recognise both the realities of access to justice and the need to safeguard the reliability and fairness of the legal process.