A US-based recording artist has filed a copyright lawsuit against Stability AI and music licensing company AudioSparx, adding to a wave of legal challenges over the use of copyrighted music in training generative AI systems. In a complaint filed in the U.S. District Court for the Western District of North Carolina on December 29, 2025, Jerry Anders, who performs as Anders Manga, accuses the companies of illegally using his music to train AI audio generation products.
The lawsuit names Stability AI, which created the Stable Audio tool, and Navarr Enterprises, which operates the commercial music licensing platform AudioSparx. Mr. Anders, who lives in North Carolina, argues that the alleged infringement caused harm within the state, thereby giving the court jurisdiction over the dispute. Importantly, this case comes at a time when courts are increasingly scrutinizing how AI companies source and monetize training data.
Against this backdrop, Mr. Anders is asking the court to award him statutory damages, grant injunctive relief preventing further use of his work, and order the defendants to pay his legal costs. He has also demanded a jury trial on the claim.
Details of the allegations
According to the complaint, Jerry Anders alleges that Stability AI and Navarr Enterprises, doing business as AudioSparx, infringed his copyright through the training and commercial deployment of AI audio generation systems.
Anders states that prior to the alleged infringement, he had written, released, and registered several original songs and sound recordings with the U.S. Copyright Office. This registration was filed on July 7, 2009, and predates the defendants' alleged use of the works.
The complaint further alleges that Stability AI operates an AI music generation platform known as “Stable Audio.” Meanwhile, AudioSparx allegedly provided complete music recordings through its licensed music library for use in training the AI system. Mr. Anders claims that through this arrangement, the defendants had access to his copyrighted works before and during the training of Stable Audio, and copied them without his permission.
Importantly, Anders pointed to his 2015 contract with AudioSparx, which he claims does not grant permission to copy, reproduce, or use his work for AI training. He also claims that AudioSparx did not advertise or describe AI licensing or AI training as part of its business when he signed the contract. The complaint further alleges that Anders asked AudioSparx to remove his music from its catalog months before Stable Audio's public release, and separately attempted to opt out of the use of his work for AI-related purposes. However, AudioSparx reportedly denied those requests.
Despite repeated challenges and formal requests to remove his work from AI-related datasets and licenses in late 2025, the complaint alleges that the defendants continued to commercially exploit Stable Audio and monetize models trained on his recordings, resulting in continued infringement.
Copyright litigation and the situation in India
Under the Indian Copyright Act, 1957, copyright infringement centers on the unauthorized reproduction, storage, or communication of a protected work. However, Indian courts have yet to conclusively decide whether the ingestion of copyrighted material for AI training constitutes “copying” under the law. The issue remains unresolved, with no binding precedent to date, even as the OpenAI vs. ANI case remains pending before the Delhi High Court.
Intellectual property (IP) lawyer Rahul Ajatshatru said that “in the history of copyright law around the world, non-human expression (and by extension, intelligence) has never been considered,” adding that there is currently no precedent set through court decisions on this issue. Ajatshatru questions whether the “fair dealing” or “fair use” defenses against infringement can or should be extended to non-human intelligence.
The Indian Copyright Act, 1957 does not allow for “fair use” in the broad, open-ended sense found in the United States. Instead, certain “fair dealing” exceptions are provided under Section 52. These permit limited use of copyrighted material without permission for clearly defined purposes, such as private or personal use (including research), criticism or review, reporting on current events, or use in judicial proceedings. Ajatshatru further argued that AI should not fall under the ambit of fair dealing, saying, “Artificial intelligence should not be treated on a par with human intelligence in the law as far as privileges and protections are concerned,” adding that exceptions such as research should not apply to AI models.
Additionally, he said liability becomes clearer when copyrighted works are obtained illegally. “If training requires storing or handling copyrighted material without permission, strict liability arises and must be recognized by law. Willful negligence is not a defense,” he said.
Why is this important
This case highlights the growing fault lines in the generative AI music ecosystem, particularly between large rights holders and individual artists. Meanwhile, major record labels are increasingly negotiating from a stronger position. Deals such as Warner Music's license agreement with Suno signal a move toward formal commercial arrangements that allow AI companies to train models with legal certainty while providing significant compensation to rights holders. But such avenues remain largely inaccessible to independent musicians.
In contrast, these lawsuits demonstrate that individual artists often have to resort to litigation to assert control over their work, especially when companies allegedly ignore opt-out requests or contractual restrictions.
As a result, the dispute raises questions about whether existing licensing structures adequately protect creators who lack the leverage of major catalogues. It also exposes a practical imbalance: AI companies can strike global deals with labels, while smaller artists often face opaque data pipelines and limited visibility into how their music is being used.
Additionally, this case has implications for broader policy debates, including in India, about whether copyright law should evolve to explicitly address AI training or continue to rely on interpretations of reproduction and lawful sourcing.
Importantly, courts around the world are increasingly focusing on how training data is obtained, rather than just the training itself. As AI-generated music tools proliferate, the outcome of such disputes will determine whether consent and compensation become the default norm or an exception negotiated only by the most powerful players.
