Posted: 16/02/2024
In January 2023, Getty Images – a ‘pre-eminent global visual content creator and market place’ – commenced legal proceedings in the English High Court against Stability AI, an open-source generative artificial intelligence (AI) company. Getty is alleging infringement of its IP rights in relation to Stability’s deep learning AI model that automatically generates images (known as ‘Stable Diffusion’).
Stability sought to have two aspects of Getty’s claim against it struck out at an interim hearing. Mrs Justice Smith, however, found in Getty’s favour; she did not strike out any aspects of its claim, and instead she questioned the ‘credibility’ of evidence submitted by Stability.
Getty served particulars of claim on Stability on 12 May 2023. Broadly speaking, Getty alleges that Stability has ‘scraped’ millions of images from various websites operated by Getty without its consent, and unlawfully used those images to train and develop Stable Diffusion. It is also alleged that the output of Stable Diffusion (synthetic images that can be accessed by users in the UK) infringes Getty’s intellectual property rights by reproducing substantial parts of works in which copyright subsists and/or bearing a UK registered trade mark.
Stability applied for reverse summary judgment and/or strike out in respect of two aspects of Getty’s claims: (1) the training and development claim; and (2) the secondary infringement of copyright claim (both detailed below).
The application, along with an application made by Getty to amend its particulars of claim, was heard before Mrs Justice Smith on 31 October 2023 and 1 November 2023, and judgment was handed down on 1 December 2023.
The primary issue to be decided by the court was whether it should throw out Getty’s claim against Stability alleging that Stability’s use of Getty’s images to train and develop Stable Diffusion amounts to copyright infringement under the Copyright, Designs and Patents Act 1988 (CDPA) in the UK.
There was a substantial amount of evidence served by both parties to the proceedings in relation to the application as a whole, but most of the evidence concentrated on the question of whether any of the training and development carried out on Stable Diffusion took place in the UK.
Mr Emad Mostaque, founder and chief executive officer of Stability AI, told the court that ‘I am confident that no Stability employee based in the UK has ever worked on developing or training Stable Diffusion.’ Mr Mostaque, along with other witnesses for Stability, also claimed Stability’s computing power was accessed entirely via the US.
Stability’s overall position was that the evidence clearly demonstrates that any training and development of Stable Diffusion took place outside of the UK. It argued that, as a result, there is no infringement of copyright under the CDPA and so Getty’s claim against Stability had no real prospect of success – it was bound to fail.
Mrs Justice Smith, however, was not entirely convinced. While she acknowledged that the evidence before the court supported a finding that, on the balance of probabilities, no training or development of Stable Diffusion took place in the UK, there was insufficient evidence to grant summary judgment on the point. An interim application hearing is not a trial and Mrs Justice Smith was not satisfied that Getty’s claim against Stability relating to the training and development of Stable Diffusion in the UK had no real prospect of success.
Mrs Justice Smith also accepted Getty’s submissions that Mr Mostaque’s previous media appearances and YouTube interviews appeared to contradict the contents of his evidence.
Both Getty and Stability will now be subject to the disclosure process, which will involve disclosing the ‘precise nature of the development and training of Stable Diffusion’.
The High Court also refused to throw out Getty’s other claim against Stability, which alleges that Stability has unlawfully imported into, possessed and/or dealt with an ‘article’, namely the pre-trained Stable Diffusion software, in the UK. The judge said that this claim ‘really stands or falls’ on the true interpretation of the word ‘article’ in sections 22, 23 and 27 of the CDPA; in particular, whether sections 22 and 23 of the CDPA are limited to dealings in ‘articles’ which are physical and tangible items only, or whether ‘article’ also encompasses intangible items (such as software made available online).
Stability argued that the true interpretation of sections 22, 23 and 27 of the CDPA as set out above can only be that an article is something that is a physical and tangible item, rather than something intangible like Stable Diffusion. On this basis, it argued, there is no real prospect of Getty succeeding in a claim of secondary infringement.
Again, Mrs Justice Smith was not convinced, and she refused to dismiss Getty’s claim. She explained that arguments surrounding the true interpretation of ‘article’ under the CDPA raise ‘a novel question, not definitively determined previously, which would be better resolved once all the facts have been ascertained at trial’. As such, the meaning of ‘article’ under the CDPA should be considered at trial rather than in a ruling based on ‘the necessarily abbreviated and hypothetical basis of pleadings or assumed facts’ raised in the summary proceedings.
The outcome of Getty’s claim against Stability, and of other similar cases, will have far-reaching implications. The legal boundaries associated with generative AI are continually being established, particularly as the courts consider novel issues and set new precedents.
This raises uncertainty for all players in the AI landscape, from publishers and artists to AI companies and users, who continue to learn and adapt to the evolving legal landscape. The secondary infringement claim is particularly interesting for UK developers and UK copyright holders alike. If the term ‘article’ is held by the courts as not applying to intangible items like Stability’s AI technology, we may see developers ‘jurisdiction shopping’ to train AI models in more favourable legislative landscapes around the globe.
Developers may be able to train their AI solutions using copyright protected material outside of the UK (where they are subject to less restrictive AI regulation or even legislative exemptions), before introducing AI solutions for use in the UK where their software will not be deemed an ‘article’ for the purposes of sections 22, 23 and 27 of the CDPA. The outcome of this case on this particular limb may have wide-ranging consequences for the industry more broadly.
Earlier this month, Valve – the developers behind the popular gaming platform, Steam – implemented new rules to govern the use of AI content in games. Developers seeking to publish games on Steam are now required to disclose when a game contains pre-generated AI content and promise that it is not ‘illegal or infringing’. They will also need to disclose when a game contains AI content that is generated while a game is running, and detail the safety measures put in place to prevent the generation of unlawful content. In addition, players will be made aware if a game uses AI generated content and are encouraged to report any illegal material.
In a recent blog post, Valve confirmed that it will ‘continue to learn from the games being submitted to Steam, and the legal progress around AI, and will revisit this decision when necessary’.