Posted: 17/09/2024
Our experts, partner Rebecca Dziobon, and senior associates Lee Henderson and Eleanor Moodey, consider recent developments in the production of convincing deepfake material, and the potential implications for the family law sphere.
Developments in tech and artificial intelligence are happening at a dizzyingly fast pace. As legal professionals, we are keen to understand what is being developed and how it is being deployed in litigation.
Stories emerge every week regarding the use of tech by manipulators, often political, seeking to cross the line between truth and lies, using the personas of those we trust, or who have influence over the public, to manipulate our opinions and therefore our decision-making. In one recent example, the US authorities had to issue cease-and-desist orders to two corporations behind a robocall campaign which sought to persuade up to 25,000 voters not to vote in the New Hampshire primary election.
As lawyers, we deploy evidence to persuade judges to make decisions that have profound implications for our clients, their families, and their businesses. It is the veracity of that evidence that is the subject of this article. As litigators, we need to have an enhanced understanding of the technology that can create and manipulate evidence (documents, video and audio), how it may manifest in our cases, how to spot it, and the steps to take to challenge it.
Much of what we read in the press is about tech laying waste to lawyerly tasks (cue scenes similar to the Lehman Brothers collapse in 2008, with solicitors carting their belongings home in cardboard boxes), or litigants in person relying on AI chatbots to provide the legal research and argument upon which they then rely in court.
But what we are starting to see on the ground is a rise in sophisticated fakery cropping up as evidence. The implications, if judicial decisions are made based on misleading or downright fake evidence, are terrifying. The use of fake and deepfake evidence can undermine the integrity of the court process.
In the criminal courts, a person’s liberty can be taken away on the basis of deepfake images, audio, or video, that create a false narrative. In the family context, audio or video can suggest seriously disturbing, often sexually deviant, behaviour seeming to incriminate a parent so that contact with children is prohibited. In civil litigation, compensation can be won for a ravaged reputation, but only months or years after the damage has been wrought, and it is often too late for the individuals and families involved.
The lengths to which some litigants will go to make their case should not be underestimated. In May 2024, Mr Justice Mellor delivered a meticulous 230-page judgment (Crypto Open Patent Alliance v Craig Steven Wright) demolishing the claims made by Dr Craig Wright that he was Satoshi Nakamoto, the creator of Bitcoin.
During the six-week trial, Dr Wright had produced 47 forged documents in support of his impersonation, through which he sought to wield power over the cryptocurrency ecosystem by claiming to own IP in the building blocks of cryptocurrencies. Mr Justice Mellor concluded that:
‘Dr Wright’s… [actions]… represent a most serious abuse of this Court’s process…It is clear that Dr Wright engaged in the deliberate production of false documents to support false claims and use the Courts as a vehicle for fraud. Despite acknowledging in this Trial that a few documents were inauthentic (generally blamed on others), he steadfastly refused to acknowledge any of the forged documents. Instead, he lied repeatedly and extensively in his attempts to deflect the allegations of forgery.’
Expert analysis of the forged documents revealed that they were written using software that was not available in 2008 – when they were claimed to have been produced – and that they contained ‘anachronistic metadata’ created by Dr Wright when he tried to manipulate the formatting to resemble Satoshi’s seminal 2008 white paper which launched Bitcoin. Mr Justice Mellor has now referred the papers in the matter to the CPS to consider whether a prosecution should be commenced against Dr Wright for wholesale perjury and forgery of documents[1].
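At its simplest, the kind of first-pass check performed here is mechanical: read a file’s embedded metadata and ask whether the authoring software and timestamps are consistent with the document’s claimed date. The judgment does not describe the experts’ actual tooling, but a minimal sketch of such a check, assuming a PDF document, the open-source pypdf library, and a hypothetical file name, might look like this:

```python
from pypdf import PdfReader  # pip install pypdf

def summarise_pdf_metadata(path):
    """Print the embedded metadata fields most often probed in a forgery dispute."""
    meta = PdfReader(path).metadata
    print("Producer: ", meta.producer)           # software that wrote the PDF
    print("Creator:  ", meta.creator)            # application that authored the content
    print("Created:  ", meta.creation_date)      # embedded creation timestamp
    print("Modified: ", meta.modification_date)  # embedded last-modified timestamp

# A 'producer' released after the document's purported date, or a
# modification timestamp that contradicts the claimed history, is an
# immediate red flag warranting full forensic examination.
summarise_pdf_metadata("disputed_document.pdf")  # hypothetical file name
```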
Personal litigation, such as family proceedings, is particularly prone to the use of altered evidence because the stakes are so high. There are many reported instances in which audacious attempts to manipulate the truth have been unearthed, often involving efforts to hide wealth in a divorce context. Such practices persist despite the fact that deliberately or recklessly misleading the court is a contempt of court, punishable by a fine or even a prison sentence.
One of the most well-publicised examples in the family sphere is the big-money case of Akhmedova v Akhmedov, with which readers of the tabloids will be familiar. This was a bitterly fought international divorce played out over several years across five jurisdictions, in which the Russian husband (worth in excess of £1 billion) sought to put assets – including his £250 million superyacht – out of the reach of the wife. He also produced forged documents as ‘evidence’ to suggest that a divorce had already taken place in Russia. The wife was awarded £453 million, but has only received around 10% of that to date, as enforcement has proven elusive.
Another recent family law example is X v Y [2022] EWFC 95, in which the husband wanted to move to London (from his home country) and tried to convince his wife that this would benefit them financially. The wife did not want to move, but was persuaded to do so on the basis of his indication that a company wanted to buy his business for £80 million; the husband showed her bank statements purporting to evidence a down payment of £8 million. Once in England, the husband ceased paying the rent on the family home and the children’s school fees, leaving the wife in a dire financial situation, living on state support and physically unwell. The court found that, on the balance of probabilities, the husband had dishonestly and falsely manufactured the bank statements during the marriage in order to persuade the wife to relocate to England. In another finance case, Vispute v Vispute (2023), the wife altered bank statements and submitted them as part of her financial disclosure in order to deceive the court as to her true income.
In a case in which we were acting for the wife, the husband created false bank statements to downplay his financial position. His interference with the HSBC statements was unearthed by viewing the documents’ properties, and by noticing that the documentation purported to relate to ‘31 September’: a rookie error. In 2022, a different litigant in divorce proceedings misled the court by editing three house valuations from estate agents and submitting them as evidence. In both of these cases, the husbands were given custodial sentences.
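The ‘31 September’ slip illustrates how even a trivial automated check can flag a forgery. The sketch below, a simplified illustration using hypothetical data rather than anything drawn from the case, simply tries to parse each transaction date on a statement; a date that cannot exist on the calendar is rejected outright:

```python
from datetime import datetime

def find_impossible_dates(date_strings, fmt="%d %B %Y"):
    """Return the date strings that are not real calendar dates.

    Forged statements sometimes contain dates that do not exist,
    e.g. '31 September' - strptime rejects these outright.
    """
    impossible = []
    for raw in date_strings:
        try:
            datetime.strptime(raw, fmt)
        except ValueError:
            impossible.append(raw)
    return impossible

# Hypothetical dates taken from a disputed statement:
dates = ["30 September 2021", "31 September 2021", "01 October 2021"]
print(find_impossible_dates(dates))  # -> ['31 September 2021']
```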
Whilst the presentation of forged or misleading evidence to the court is nothing new, as the technology develops, our ability to spot it, decipher it, or even to be aware that certain types of fraudulent evidence are possible, is still in its infancy. Most litigation practitioners will have experienced ‘simple’ forgeries such as the above: faked signatures, amended bank statements, or selectively edited WhatsApp messages. These are usually picked up through inconsistencies in oral evidence (if the matter goes to trial), through contrary evidence, via the metadata of documents, or with input from forensic experts.
But we are now seeing more sophisticated fakes that seek to undermine the credibility or character of the litigants themselves. In a 2020 case, a mother sought to persuade the court that the father was violent and dangerous by producing a manipulated audio recording which suggested he had made threats towards her. The audio clip had been heavily doctored by the mother using online software and tutorials. The tampering was only discovered by examining the metadata of the original recording.
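As with documents, the first pass over a disputed recording is largely mechanical. The case report does not set out the forensic process actually used, but as a simplified illustration, assuming the open-source mutagen library and a hypothetical file name, the basic technical properties of an audio file can be inspected like this; an edited clip often differs from a genuine device recording in its format, bitrate, duration, or the absence of device-written tags:

```python
from mutagen import File as load_audio  # pip install mutagen

def summarise_audio(path):
    """Print the basic technical properties of an audio file."""
    audio = load_audio(path)  # returns None if the format is unrecognised
    print("Format:       ", type(audio).__name__)
    print("Duration (s): ", round(audio.info.length, 2))
    print("Bitrate:      ", getattr(audio.info, "bitrate", "n/a"))
    print("Tags:         ", dict(audio.tags or {}))  # device/encoder-written tags

summarise_audio("disputed_recording.mp3")  # hypothetical file name
```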
The consequences for litigants who seek to mislead the court can be severe, ranging from contempt of court (leading to a fine or prison) to other criminal offences. Following reform of the Online Safety Act, it is now an offence to create doctored, sexually explicit images of adults; any adult creating such deepfakes can face prosecution and an unlimited fine. However, consequences can only follow where the fakery is uncovered.
The judiciary has been provided with guidance on AI. This guidance reminds solicitors and barristers that all legal representatives are responsible for the material they put before the courts, and have a professional obligation to ensure it is accurate and appropriate. It goes on to explain how the use of AI to produce legal research and case citations may be prone to error, and gives some practical tips for spotting features that suggest work may have been produced by AI.
Open source generative AI chatbots are already being used by many litigants in person to produce their written submissions. The errors and inconsistencies in the output of these tools are usually easy for a legally trained professional to spot, but they will undoubtedly become more sophisticated over time. As a result, judges will need specialist training, and may need to be persuaded that expert evidence should be obtained to verify the authenticity and reliability of material put before the court.
Clients should ensure that their legal representatives are aware of the potential for evidence to be manipulated or created using technology, that they have experience of dealing with these issues, and that they have trusted experts on speed dial to assist when forensic investigation is required.
Clients should also be prepared for their lawyers to probe the authenticity of any documents, audio, video or communications such as WhatsApp messages that they provide in support of their case. The lawyers are responsible for ensuring that any evidence put to the court is accurate.
If authenticity is in doubt, it will be necessary to obtain expert input, and potentially to determine the credibility of the evidence at an interim hearing, or to extend the final hearing so that it can be dealt with as a preliminary issue. There is not yet any guidance on how this should be approached.
It should be noted that the court must give permission for an expert (usually a single joint expert instructed by both parties) to be appointed in family law cases. The aim of the court rules (the FPR 2010) is to limit the number of experts involved, in order to reduce delay and cost. Any expert evidence must be ‘necessary’, meaning somewhere between ‘indispensable’ on the one hand and ‘useful, reasonable or desirable’ on the other (Re H-L (A Child) [2013] EWCA Civ 655).
We are seeing an increase in the use of digital forensic experts to trace digital or crypto assets, as in DH v RH [2024] EWFC 79. The President of the Family Division has issued helpful guidance explaining the principles the court applies when considering whether to authorise or admit expert evidence.
Neither the Family Procedure Rules nor the Civil Procedure Rules currently provide a framework for evaluating deepfake evidence in court proceedings. In the international sphere, a collaborative group from the US and UK has recently produced a guide for judges and fact-finders on evaluating digital open source imagery, which sets out practical steps for interrogating the authenticity of such evidence.
Can we turn to more technology to help us protect our clients against fake evidence produced by litigants using technology? There is a growing range of technologies that can be used in the defence against fake evidence; one simple building block is sketched below.
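One technique commonly underpinning such tools (an illustrative assumption on our part, not a description of any specific product) is the cryptographic hash: a digital fingerprint taken of each evidential file when it is first collected, so that any subsequent alteration can be demonstrated. A minimal sketch, with hypothetical file names:

```python
import hashlib

def sha256_fingerprint(path):
    """Compute the SHA-256 hash of a file, reading in memory-friendly chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hash recorded when the evidence is first collected (hypothetical file):
original = sha256_fingerprint("exhibit_A_recording.mp3")

# Re-hash before trial: a change to even a single byte alters the hash.
if sha256_fingerprint("exhibit_A_recording.mp3") != original:
    print("File has been altered since collection.")
```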
However, many of these technologies (all of which will require financial investment) are likely to demand considerable training, and perhaps even dedicated job roles in future, to deal with the increase in falsified evidence. That matters because there is always room for error, both in the human interaction with the technology and in the technology itself. The infamous Post Office scandal, and the terrible consequences that flowed from the flaws in the Horizon software, are a very real reminder that technology is not foolproof, and of just how difficult that can be to prove.