Guarding your reputation: defending against AI-driven defamation

Posted: 10/10/2024


The brave new (digital) world in which we find ourselves is an ever-changing technological landscape. The way in which we consume media, and the extent to which we utilise and rely upon technology, mean – for better or worse – that our digital footprint is seemingly ever growing. Going ‘viral’ or being ‘cancelled’ online are relatively new turns of phrase that have become commonplace in our day-to-day vocabulary; both reflect the evolving ways in which our reputation – online and offline – is vulnerable to the often-unforgiving realms of social and other online media.

A growing technological trend that seems to be embedding itself into our everyday lives in a variety of ways is the use of artificial intelligence, or ‘AI’. ‘Generative AI’ is a type of artificial intelligence that creates new content – such as text, images, or music – by learning patterns from large datasets. It mimics human creativity, producing convincing outputs like realistic text or images based on the data it has been trained on. 

For all its many advantages, such technological innovation does carry significant risk. As generative AI models become increasingly sophisticated, the risks and potential pitfalls grow with them. Fake news, doctored images and the rather concerning emergence of ‘deepfakes’ are but a few examples of how one’s reputation is vulnerable to attack by the outputs that generative AI platforms might create.

Warren Buffett famously said that it takes 20 years to build a reputation and five minutes to ruin it. How do the rapid developments in generative AI affect that? When false information can be spread so quickly and easily to potentially millions of people across the world, the scope for irreparable damage to reputations is huge – and it may take even less than Buffett’s five minutes!

So, should we be worried? Well, it would be remiss, when examining the intersection of generative AI and reputation management, not to ask AI for its thoughts. Asked to give a short summary of the challenges generative AI poses to protecting reputation online, ChatGPT said:

‘Generative AI poses several challenges to protecting reputation online, including the creation and spread of deepfakes and misinformation, which can rapidly damage reputations. It can also be used maliciously to exploit weaknesses in public images, overwhelm crisis management efforts with false narratives, and generate insincere or flawed personalised communications. These challenges necessitate vigilant monitoring and proactive strategies to mitigate potential harms.’

That sums up the issues pretty nicely… but it would perhaps be more helpful to examine how that might play out in the real world. Using a fictional individual (created by the authors, not generative AI!), and focusing on the publication of false information, this article considers just how potent these new technologies could be when things go wrong and the ways in which we might tackle the challenges they pose to managing one’s reputation.  

(Temporary) downfall of an Olympic hopeful 

Taylor Conway is a semi-professional athlete, currently training to compete in an international gymnastics competition, with future aspirations of making the Team GB squad for the Los Angeles Olympics in 2028. She trains four days a week alongside her full-time job in finance. She has built up a modest but engaged social media following, whose members praise her for her candid posts about the struggles of juggling training, a full-time job, raising two young children and some recent mental health challenges she has overcome. She recently won an award for inspirational women in sport – one of her proudest achievements – which helped secure a lucrative sponsorship deal with a well-known sports brand, meaning that she will finally be able to reduce her working hours and focus more on her training.

Taylor has publicly spoken in the past about issues facing women in sport – including sexism, inequality and, sadly, female athletes being the victims of sexual assault. There are numerous video clips, posts and photos of her online addressing these issues and talking about historic allegations of sexual assault. In the weeks preceding the competition in Liverpool, social media is awash with posts about the event. A local online blogger, Liam Smith, is writing an article about the competition and uses a generative AI platform to do some research on the athletes competing this year. The AI platform that Liam uses produces a statement saying that Taylor Conway was accused of sexual assault in 2021 – an allegation that is completely baseless and wrong.

Despite the AI platform’s generic warnings that users should check the accuracy of its content, Liam sees this as a potential ‘big scoop’ for his blog and fails to do any independent research into the defamatory allegation. He does not approach Taylor for comment and proceeds to write a blog post, which he publishes on his website overnight. He then posts a link to his blog on social media, where it fairly quickly gains significant traction.

Taylor wakes up to hundreds of direct messages on social media from concerned followers, and a number of missed calls from friends. She also receives emails from journalists at local news outlets asking for her comment on the allegations. Before she has a chance to address them, the backlash on social media has already gained momentum and she feels that the situation is spiralling. Angry followers and critics of Taylor demand that she hand back her award for inspirational women in sport and call for a boycott of her appearance at the competition.

Taylor’s new sponsor is made aware of the allegations after a number of social media users tag it in the various posts about her online. In response, it pulls out of the sponsorship deal ‘pending investigation’ of the very serious allegations that have surfaced online, and publishes a statement to that effect. Taylor quickly puts out a statement strongly denying the allegations in an attempt to vindicate her reputation but, in the midst of the online madness, feels that she has no choice but to pull out of the upcoming competition whilst these matters are addressed.

Addressing the fallout

While this is an admittedly extreme example, the sequence of events is not outside the realms of possibility. It demonstrates our potential vulnerability to inaccurate and defamatory content produced by generative AI, and how matters can snowball when that content is shared in an online forum. But what can Taylor do? Does the current law provide adequate protection for those on the receiving end of false and defamatory allegations fabricated by AI?

The short answer is no. While AI may have developed at a rapid pace in recent years, the law has not. The arsenal of legal rights and remedies at Taylor’s disposal is the same as that available to those who are the subject of more traditional defamatory publications, such as newspaper articles. The law does not provide enhanced protection against false and defamatory content produced by generative AI.

One of the principal ways by which Taylor might seek to vindicate her reputation is by pursuing a libel claim. In order to have a viable claim, Taylor must show that the allegation published about her is false, that it is defamatory of her, and that it has caused or is likely to cause serious harm to her reputation. But who would she sue – the generative AI platform, Liam Smith, or the deluge of people online who shared Liam’s blog and social media posts?

It would not be the first time a claim has been brought against a generative AI platform – ChatGPT’s owner, OpenAI, is currently being sued in the US by Mark Walters, a radio host and founder of Armed American Radio, after ChatGPT produced the text of a fabricated legal complaint accusing him of embezzling money. Whilst the generative AI platform will likely be regarded as both an ‘author’ and a ‘publisher’ of the original allegation, as might Liam Smith, anyone who republished the defamatory statement about Taylor could also be a potential defendant to a libel claim.

Other tools in the armoury?

As the use of generative AI develops, so too will the ways in which victims of damaging content aim to protect their positions. While a defamation claim might be the primary way in which Taylor seeks to vindicate her reputation, there could be other mechanisms by which the inaccurate statement can be challenged; for example, data protection law. 

The current UK data protection regime is set out in the UK General Data Protection Regulation (UK GDPR), tailored by the Data Protection Act 2018 (DPA). Generative AI platforms may well be caught by the provisions of the UK GDPR, in which case they would need to comply with their obligations under both it and the DPA. In Taylor’s case, the publication of an inaccurate statement about her would breach her data protection rights – in particular, the requirement that personal data processed about her be accurate. Action might therefore be taken against the AI platform (as a data controller) to rectify or erase the inaccurate personal data that it processes about Taylor.

However, such data protection rights and remedies are available only to individuals: the regime protects personal data, so it offers no equivalent recourse to corporate entities whose reputations come under attack.

Prevention is better than cure

Having considered how to address concerns ‘after the event’, how can individuals and businesses mitigate in advance the potential impact of content produced by generative AI? There are a couple of proactive strategies that could be adopted:

  • Monitoring digital footprint

Forewarned is forearmed. An important part of protecting your reputation is staying aware of what content exists about you and how you are portrayed online. Regularly monitoring your digital footprint – including social media platforms, news outlets and other digital channels – can help you quickly identify any false or damaging content circulating online as it arises.

This can be achieved through fairly basic methods, starting with search engines, but there are other tools available to dive deeper. In this context, AI can be your friend rather than your foe, with some AI-powered platforms able to help track online mentions, flag suspicious content and identify patterns in the spread of misinformation – the aim being to correct false information quickly, before it goes viral.
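By way of illustration only, the short Python sketch below shows the kind of basic, automated monitoring described above: it polls a public news feed for mentions of a name and flags headlines containing potentially damaging keywords. The feed, the subject’s name and the keyword list are assumptions made purely for this example – commercial monitoring services are far more sophisticated.

```python
# Illustrative sketch only: poll a public news RSS feed for mentions of a
# name and flag items containing high-risk keywords. The feed URL, subject
# name and keyword list are assumptions for the example; a real monitoring
# setup would also cover social media and use proper alerting.
import urllib.request
import urllib.parse
import xml.etree.ElementTree as ET

NAME = "Taylor Conway"  # hypothetical subject from the article
RISK_KEYWORDS = {"assault", "fraud", "scandal", "accused", "allegation"}

def fetch_mentions(name: str) -> list[dict]:
    """Fetch recent news items mentioning `name` from a public RSS search feed."""
    url = ("https://news.google.com/rss/search?q="
           + urllib.parse.quote(f'"{name}"'))
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    return [
        {
            "title": item.findtext("title", default=""),
            "link": item.findtext("link", default=""),
        }
        for item in root.iter("item")
    ]

def flag_risky(items: list[dict]) -> list[dict]:
    """Return only the items whose headlines contain a risk keyword."""
    return [
        item for item in items
        if any(keyword in item["title"].lower() for keyword in RISK_KEYWORDS)
    ]

if __name__ == "__main__":
    for item in flag_risky(fetch_mentions(NAME)):
        print(f"REVIEW: {item['title']} -> {item['link']}")
```

Even a simple routine like this, run on a schedule, illustrates the principle: the earlier a false allegation is spotted, the more options there are to correct it.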

  • Crisis management plan

Fail to prepare, prepare to fail. Having a robust crisis management plan in place, in case concerning content is published online, will stand you in good stead. Such plans are particularly useful for those at higher risk of their reputations being attacked online – such as high-profile individuals, high-net-worth individuals and corporate entities.

A crisis management plan should outline the steps to be taken if and when a false information crisis hits. The starting point will be an analysis of the misinformation and an assessment of the risks it poses. Other key steps to include in the plan might be: appointing a single point of contact or spokesperson; preparing and publishing public statements; managing key relationships; using online and other digital platforms to correct the record; and engaging specialist legal experts who will be ready to advise.

What does the future hold?

As generative AI continues to advance, so too does the risk of reputational damage caused by incorrect and fabricated content being produced and circulated online. 

AI platforms, users and consumers alike should remain vigilant and recognise the growing risks posed by the potential spread of misinformation. However, in this increasingly digital world, that is perhaps easier said than done. The true impact of these technologies and the scale of the potential disputes that might arise from them remains to be seen – but watch this (digital) space.

Beyond the Code podcast: the legalities of AI

From privacy concerns and data protection to intellectual property rights and algorithmic accountability, our podcasts aim to guide you through the legal challenges and opportunities in this rapidly advancing AI-driven era. Whether you're a seasoned tech executive, a legal professional, or simply curious about the interplay between AI and the law, our podcast series is tailor-made to inform, provoke thought, and inspire meaningful conversations about the future of innovation and compliance.

Listen to the podcast here.

