The question of whether the user (consumer) or the provider of an AI application is liable for infringement of intellectual property rights resulting from the use of that application is an interesting one, and it is becoming increasingly relevant as retailers deploy more and more AI applications. The recent Getty Images v. Stability AI decision of 4th November 2025 sheds a little more light on this thorny issue.
Responsibilities of intermediaries: an old story
The issue of the level of responsibility of intermediaries in retail supply and promotional chains is not new. As mentioned in the Getty Images v. Stability AI decision, the Court of Justice of the European Union (“CJEU”) has handed down judgments on the liability of search engine platforms such as Google regarding keyword advertising in the famous Google France SARL and Google Inc. v. Louis Vuitton Malletier SA (C-236/08), regarding the storage and shipping of goods in Coty Germany GmbH v. Amazon Services Europe Sàrl and others (C‑567/18), and regarding the use of trademarks by third-party advertisers in Daimler AG v. Együd Garage Gépjárműjavító és Értékesítő Kft. (C‑179/15). In all of these cases, the intermediaries were held not liable for trademark infringement, essentially on the defense that they lacked actual knowledge of the infringing activity and were not themselves active participants in it. In essence, the courts determined that these parties had not used the trademarks in the course of trade, which is an essential prerequisite for a finding of trademark infringement.
Now let’s talk about AI.
This brings us to the question of liability for AI applications, which is a new phenomenon to consider. In the Getty case, Stability AI sought to rely on the reasoning of the Google France, Coty, and Daimler cases, arguing that the users (consumers) of its AI platform, and only the users, control the potentially infringing images produced by the platform through the prompts they enter into the AI application. Stability AI essentially claimed it was not responsible: its model was merely a tool, and it was not an active party to the infringing activity. Here, the court was considering trademark infringement based on the appearance of the GETTY IMAGES and ISTOCK trademarks in images generated by the Stability AI application. On this point, Stability AI lost.
In cross-examination of the witness, counsel for Stability AI asserted that:
“The model is a tool that you control, and the more detailed the prompts, the more control you impose.”
To this the witness replied:
“That’s partially true. The user can control what’s in the prompt. But what the user can’t control is what the model is trained on. We can’t control that. What the user can’t control are the semantic guardrails that are placed on the prompt and the semantic guardrails that are placed on the output. Sure, the user can control what they ask for, but they can’t control 100% of what gets output from the other side.”
Stability AI lost the case on this point. That is because it could be argued that (a) the provider of the AI application, not the user, controlled the data on which it was trained, and (b) the users were not actively trying to create infringing images: in the Getty case at least, the judge seemed struck by the fact that most users did not want the GETTY IMAGES or ISTOCK trademarks to appear on their output images.
Where does this leave retailers and users?
The exchanges between Stability AI's counsel and the witness cited above highlight some of the key issues regarding liability for AI applications. The central issue seems to be control. If the user does not have full control over the output of an AI application, it seems difficult to shift responsibility onto the user. Detailed prompts that encourage infringing behavior, such as searches for counterfeit products, may increase the likelihood of holding users accountable. But that is precisely the difficulty with AI applications: users rarely have complete control over their output.
Retailers will therefore likely want to focus on mitigating infringement risk within their AI applications, rather than attempting to push responsibility onto users. This is where guardrails come in. Interestingly, Stability AI had introduced a so-called “filtering feature” to prevent the application from rendering photo-realistic caricatures of famous celebrities, added to address concerns about fake news and propaganda, which scanned prompts for famous names. The company was therefore at least aware of the possibility of some concerning activity taking place through its AI model.
So what are the practical takeaways for retailers who implement AI applications? In order to avoid liability as far as possible and stay within the scope of the reasoning laid out in the Google France, Coty, and Daimler cases, in my view retailers should:
- Screen AI application training data to avoid infringing content.
- Put guardrails on their AI applications to steer users away from infringing behavior.
- Implement takedown procedures so that any infringing activity that is flagged can be promptly removed.
The above will position retailers as far as possible as passive commercial enablers, similar to keyword advertising platforms such as Google, rather than active participants in infringing activities. After all, the Google France, Coty, and Daimler cases all concerned conduct that appeared to involve some kind of infringing activity, yet those parties were successfully defended because they were deemed not to have played an active role in it.
However, the Getty v. Stability AI case makes clear that once an AI application is deemed to have used an infringing mark in the course of trade, the responsibility lies with the owner/manager of the AI application, which could potentially become the retailer in the future.
The content of this article is intended to provide a general guide on the subject. You should seek professional advice regarding your particular situation.
