Lawyers and AI hallucinations



“The use of artificial intelligence must be accompanied by the application of actual intelligence in its execution.”

Last April, in Willis vs US Bank Trust National Association, a Texas court issued a standing order pertaining to the use of and reliance on AI by lawyers. A standing order can be compared to a judicial decree, but one with an erga omnes effect, giving it wide-ranging ramifications.

In an eloquent standing order referring to an array of jurisprudence, the court explained how legal research has developed over time, from digest books to online databases to, now, the emergence of AI. It made clear that it is not opposed to the use of AI, recognising that, when used rightly, “AI can be incredibly beneficial for attorneys and the public.”

However, it also outlined risks.

A prominent risk when lawyers use AI is that of AI hallucinations. These occur when an AI system generates fake sources of information, such as invented case names or fabricated case facts. The court also identified the cause of such hallucinations: “AI models are trained on data, and they learn to make predictions by finding patterns in the data. However, the accuracy of these predictions often depends on the quality and completeness of the training data. If the training data is incomplete, biased, or otherwise flawed, the AI model may learn incorrect patterns, leading to inaccurate predictions or hallucinations.”

To illustrate this point, the court referenced one of the first AI landmark cases, Wadsworth vs Walmart. In that case, the court sanctioned three lawyers, fining them a total of $5,000 for citing fake cases generated by AI. The lawyers worked at a small law firm with limited resources. They explained to the court that, at the time, their firm did not have access to online databases such as WestLaw or LexisNexis, which are very expensive; instead, they subscribed to an inferior database that did not grant them full access to federal cases.

In response to such concerns, the court’s standing order now mandates that any brief prepared with the assistance of generative AI must include a clear disclosure on the first page under the heading: ‘Use of Generative Artificial Intelligence’.

The court clarified that this disclosure is required when a party has relied on AI without verifying the accuracy of the information it provided.

Similarly, in Al-Haroun vs Qatar National Bank QPSC, heard in London last June, the claimants’ submissions included 45 case-law citations, 18 of which turned out to be AI hallucinations, that is, outright fabrications. The court heavily criticised this misuse of AI, all the more so because it was not the lawyers who had prepared the research but their client, remarking: “it is extraordinary that the lawyer was relying on the client for the accuracy of their legal research, rather than the other way around.”

Moreover, last April, the most senior judges in the UK, including the Lady Chief Justice and the Master of the Rolls, updated the judicial guidance on AI, just over two years after the first guidance document was released. This guidance is intended to assist the English judiciary in identifying the use of AI. While it does not explicitly bar judges from using AI, it emphasises that any such use must be carried out responsibly. It provides, inter alia: “if clerks, judicial assistants, or other staff are using AI tools in the course of their work for you, you should discuss it with them to ensure they are using such tools appropriately and taking steps to mitigate any risks.”

We believe that such guidelines have not, until now, been needed in Malta, as the judiciary does not need to be coddled in the way it approaches its work and is fully capable of handling AI properly.

However, similar guidance or procedural rules for lawyers would be a step in the right direction. In fact, the authors have been faced with clients who claim to have done their own “research”, asking whether what they found on ChatGPT is true or useful.

Hence, it is not far-fetched to imagine an AI hallucinations case occurring in Malta. Such cases are emerging across the developed world, including in the US, UK, Australia, New Zealand, and Canada. This is a global phenomenon that will inevitably reach Maltese shores, and it would be prudent to anticipate this issue, sparing potential embarrassment before our courts.