Getty loses most of its UK Stable Diffusion case – and what that really means for AI training
Getty Images has largely lost its landmark UK lawsuit against Stability AI over the Stable Diffusion image generator, after the High Court in London rejected most of its claims and found only a narrow, historic trademark infringement involving Getty watermarks in some AI-generated images. Both sides are spinning the ruling as precedent-setting, but the fine print matters if you care about how AI training data will be treated in future cases.
What the High Court actually decided
The case, heard at London’s High Court and decided by Justice Joanna Smith, was structured around several distinct claims.
Getty originally alleged that:
- Stability AI had used Getty’s stock image library to train Stable Diffusion without a licence.
- Stable Diffusion outputs reproduced Getty’s copyrighted images in a way that infringed copyright.
- Stability AI had infringed Getty’s trademarks by generating images with distorted versions of the Getty watermark and logo.
- Stability AI had committed secondary copyright infringement by importing into the UK an AI model that had been made in breach of Getty’s copyright elsewhere.
By the time of the ruling, that list had already shrunk. Mid-trial, Getty dropped its direct copyright claims over training and output images, in large part because it could not prove where Stable Diffusion had been trained. That geographical uncertainty matters because UK copyright law remains strongly territorial in how it treats acts like copying and processing. Without clear evidence that training happened in the UK, the claim was on a weak footing.
What remained in front of Justice Smith were:
- The trademark infringement claim, focused on AI-generated images that included the Getty watermark.
- The secondary copyright infringement claim, based on the import of the Stable Diffusion model into the UK.
On those points, the outcome was mixed but mostly bad for Getty:
- Getty won in part on trademarks. The court agreed that AI-generated images containing Getty’s watermark infringed its trademarks. However, Justice Smith described the infringement as “historic and extremely limited in scope”, reflecting how early versions of the model behaved and how rare these outputs were in practice.
- Getty lost on secondary copyright infringement. The judge dismissed this claim, rejecting the idea that simply importing an AI model into the UK, even if it had been trained on infringing data elsewhere, automatically created secondary liability on the facts presented.
Why Getty dropped the core training claim
From Getty’s perspective, the heart of the case was always the allegation that Stability AI scraped its library at scale to train Stable Diffusion. That is the kind of claim that, if upheld, could force large AI companies to license stock imagery and other datasets directly.
The problem was evidential and jurisdictional.
- Stability AI has consistently argued that Stable Diffusion was trained on infrastructure outside the UK, primarily in US-based data centres.
- UK copyright law requires that key acts like copying take place within the jurisdiction for liability to attach under UK statutes, unless specific extraterritorial conditions are met.
- Getty could show that its images appear in the LAION dataset used to train Stable Diffusion, but that alone does not prove that the training pipeline operated within UK borders or under UK control.
Faced with those gaps, Getty’s legal team dropped the training and output reproduction claims mid-trial. That move narrows the immediate impact of the decision for AI training as a whole, because the court did not have to resolve whether scraping and training on copyrighted images is itself infringing under UK law when it happens abroad.
In practical terms, the key takeaway for AI developers is that location and infrastructure choices remain a crucial part of legal strategy. Where models are trained and where code and datasets physically reside can determine which country’s copyright rules will apply and whether a claimant can bring a case at all.
What the trademark win really covers
Getty did secure a partial victory on trademarks. Early versions of Stable Diffusion could produce images that resembled watermarked stock photos and sometimes included a distorted Getty watermark. Getty argued that those outputs infringed its trademarks and risked confusing consumers about the origin and licensing of those images.
Justice Smith agreed that the inclusion of Getty’s marks in outputs can infringe trademark law in the right circumstances. That is the “historic” part of the ruling Getty is highlighting in its public statements. However, the judge also stressed that the finding is “extremely limited in scope”. In practice that means:
- The decision applies to a specific pattern of outputs produced by specific model versions at a specific point in time.
- If Stability AI has already changed or updated its models to avoid generating watermark-like artefacts, ongoing exposure on this front may be minimal.
- Other models and companies will not automatically be caught unless they produce similar trademark-like patterns in user-facing outputs.
From a legal policy perspective, the trademark point is still important. It shows that courts are willing to treat AI models like any other product that can infringe marks if they generate confusing or misleading imagery. But it is not a general statement that all AI models using branded training data are in trouble.
The “intangible article” precedent and why Getty is still happy
Despite losing most of the concrete claims, Getty is already pointing to one part of the judgment as a strategic win. In its statement, the company highlighted that the ruling “established a powerful precedent that intangible articles, such as AI models, are subject to copyright infringement claims in the same way as tangible articles”.
This sounds abstract, but it matters for the company’s broader litigation strategy, especially in the United States.
Under UK law, certain secondary infringement provisions refer to “articles” that embody copyrighted works. A recurring question in AI cases is whether a trained model, which is essentially a large set of numerical weights plus code, counts as such an article.
Justice Smith’s reasoning accepts that AI models can fall within that concept. That does not mean every model is infringing. It does mean that, in principle, a model could be treated as an object that carries infringing material, not just as a purely abstract mathematical function.
Getty’s lawyers will likely cite that language in other courts, arguing that if a UK High Court judge is comfortable treating a model as an article for copyright purposes, other jurisdictions should take a similar view. The company has already flagged its parallel lawsuit against Stability AI in the US as a place where this precedent could be useful.
What this means for AI training datasets
For AI researchers and companies concerned about training data, the immediate effect of the ruling is more procedural than substantive.
Key points to note:
- The court did not decide whether training an AI model on copyrighted images without a licence is, in itself, infringement under UK law when done abroad. That question remains open and will be fought in other cases.
- The dismissal of the UK secondary infringement claim turns heavily on the specific facts and on how UK law treats importation and territorial acts, not on a blanket blessing of training on scraped content.
- The model as “article” reasoning creates more scope for future claimants to argue that distributing a trained model is itself a form of dealing in something that carries infringing content, if the training can be shown to be unlawful.
In other words, the decision is not a green light for unlicensed scraping in general. It is a reminder that plaintiffs still need detailed evidence about where and how models are trained, and that they may get further by targeting model distribution, trademarks and contractual angles alongside pure copyright.
How it fits into the wider Getty vs Stability AI fight
This UK case was one front in a multi-jurisdiction battle.
- Getty has a separate lawsuit against Stability AI in the United States, filed in Delaware, alleging both copyright infringement and trademark misuse for AI outputs that mimic Getty’s watermark.
- The UK Competition and Markets Authority is also scrutinising a proposed merger between Getty and Shutterstock, which would create an even larger stock media provider with its own AI generation tools trained on licensed libraries.
- Other rights holders, including individual artists, have filed cases against Stability AI and similar companies in the US, some of which have already had claims narrowed or dismissed, but not completely resolved.
Within that context, Tuesday’s ruling is neither a complete loss nor a sweeping win for Getty. The company failed to secure damages or a ruling that Stable Diffusion’s core training was unlawful under UK law, but it gained language it can use elsewhere about models as infringing articles and about trademarks in AI outputs.
For Stability AI, the outcome is a relief but not a full exoneration. The company avoided secondary copyright liability in the UK on this set of facts and faces only a constrained trademark finding that may mostly apply to earlier model behaviour. But the broader argument about training datasets is still alive in other courts.
Investor reaction and business impact
Markets treated the decision as a modest negative for Getty in the short term. The stock fell in premarket trading after the ruling, reflecting disappointment that the company did not secure a stronger victory in a case that had been billed as a landmark for AI copyright. Later indications suggested a partial recovery as investors digested the limited financial impact and the potential upside of the “intangible article” precedent.
For Stability AI, which is not publicly listed, the more immediate issues are legal costs, reputational risk and the need to keep iterating its models and terms of use to avoid obvious sources of liability, such as visible third-party watermarks in outputs.
The ruling also sends a signal to stock media competitors and AI platform operators:
- Stock providers that want to license their libraries for AI training, as Getty itself has done with Nvidia in the past, will watch these cases closely to see whether courts create new leverage for licensing or leave the legal environment ambiguous.
- AI model vendors will continue to weigh the costs of negotiating licences for curated datasets against the legal and reputational risks of training on scraped material.
How UK courts are positioning themselves on AI and IP
The Getty judgment slots into a growing line of UK and EU cases wrestling with generative AI and intellectual property.
Several themes are emerging:
- Courts are willing to adapt existing concepts, like “article” and “importation”, to AI systems, but are cautious about making sweeping statements that could unsettle broader software and data practices.
- Territoriality remains central. Where training happens, where models are hosted and where users interact with them all shape which legal tools are available.
- Trademarks and consumer confusion are often easier to apply to AI outputs than pure copyright theory, especially when visible brands or watermarks appear in generated content.
If legislators want a clearer, more predictable regime for AI training data, they may eventually need to update statutes rather than relying on courts to stretch old definitions case by case. Until that happens, we should expect more mixed outcomes like this one: partial wins, partial losses and lots of careful wording that lawyers will reuse in future filings.
Editor’s take
The headline that Getty “largely loses” its UK case is accurate, but the more interesting story is how thin the margins are around both sides’ claims. Getty could not prove enough about where Stable Diffusion was trained to win on the core copyright theory. Stability AI, for its part, has to live with a judicial finding that AI models can be treated as articles for copyright purposes and that outputs mimicking stock watermarks can infringe trademarks.
For the broader AI ecosystem, the decision keeps us in a familiar place. Training data lawsuits are hard, fact-intensive and slow. There is still no definitive answer on whether scraping copyrighted images to train an AI model is lawful by default. But the contours of the argument are clearer, and each case adds another layer of precedent that future judges will have to either follow or distinguish.
In the meantime, companies that build or deploy image generators have two clear action items: stop models from emitting anything that looks like someone else’s watermark, and think carefully about where training happens and what logs and evidence you would be willing to show a court if challenged. The Getty case shows that those operational details are no longer just engineering questions. They are legal questions too.
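What "evidence you would be willing to show a court" might look like in practice is an audit trail tying each training run to a jurisdiction and a dataset fingerprint. The sketch below is purely illustrative: the function names, fields and `record_training_run` helper are assumptions of this article, not any real framework's API.

```python
# Hypothetical sketch of training-run provenance logging. All names and
# fields here are illustrative assumptions, not a real library's API.
import hashlib
from datetime import datetime, timezone


def dataset_fingerprint(file_paths):
    """Stable SHA-256 fingerprint over the dataset's file contents."""
    digest = hashlib.sha256()
    for path in sorted(file_paths):  # sort so ordering never changes the hash
        with open(path, "rb") as f:
            digest.update(f.read())
    return digest.hexdigest()


def record_training_run(model_name, dataset_files, region, licence_refs):
    """Build one audit-log entry linking a run to a place and a licence basis."""
    return {
        "model": model_name,
        "dataset_sha256": dataset_fingerprint(dataset_files),
        "training_region": region,           # e.g. a cloud region identifier
        "licence_references": licence_refs,  # licences said to cover the data
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
```

Entries like this, written at training time and stored immutably, are the kind of record that could answer the territoriality questions Getty could not: where copying happened, on what data, and under what claimed licence.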
Sources
- Reuters – Getty Images largely loses landmark UK lawsuit over AI image generator
- Yahoo Finance – Getty Images largely loses landmark UK lawsuit over AI image generator
- London Stock Exchange – Getty Images largely loses landmark UK lawsuit over AI image generator
- Wikipedia – Stable Diffusion, Getty Images v Stability AI background
- Wikipedia – Artificial intelligence and copyright
- Wikipedia – Getty Images and generative AI controversy