Apple Music announced a new policy requiring record labels and music distributors to disclose if artificial intelligence was used to create content uploaded to the streaming platform.
The new requirements, announced Wednesday, introduce a system known as “transparency tags” that will require labels to indicate when AI is involved in various aspects of their releases. The disclosure applies to album artwork, individual tracks, musical works (including lyrics), and music videos distributed to the platform.
Under this framework, labels and distributors must flag releases where artificial intelligence generated a significant portion of the content. The requirements cover four specific areas: artwork, track-level sound recordings, musical compositions, and the visual elements of music videos.
For artwork, the tag applies when AI meaningfully contributes to the album visuals, including both static designs and motion graphics. At the track level, tags are required if AI has generated a significant part of the sound recording.
Similarly, composition tags apply when AI is used to generate lyrics or other elements of a song’s structure, and music video tags are used when artificial intelligence generates a substantial visual component within a video release.
According to Billboard, Apple Music described the tagging system as a “tangible first step toward the transparency the industry needs to establish best practices and policies that work for everyone.”
The tags are designed to identify instances where artificial intelligence played a meaningful role in producing content, rather than cases where AI tools provided only minor technical assistance during production.
The move places Apple Music among several streaming platforms responding to the rapid increase in AI-generated music across the industry. For example, French streaming service Deezer recently revealed that around 60,000 fully AI-generated songs are uploaded to its platform every day.
Other platforms have adopted different strategies to manage this trend. Deezer has developed a proprietary detection technology that automatically identifies and labels AI-generated tracks while excluding them from editorial and algorithmic recommendations.
Meanwhile, Spotify has focused on limiting fraudulent activities such as deepfakes, artificial streaming manipulation, and spam. The company is also working with music metadata organization Digital Data Exchange (DDEX) to develop broader standards for AI disclosure in the industry.
Hi-res streaming platform Qobuz has also introduced a policy mandating labels for releases that are entirely AI-generated, while prioritizing human-created music in its editorial selection and recommendation system.
Some services have stricter measures in place. Bandcamp bans music that is wholly or substantially generated by artificial intelligence, and iHeartRadio launched a “Guaranteed Human” initiative to exclude AI-generated songs from its radio programming.
But Apple Music’s approach stops short of banning AI-assisted content. Instead, the platform is opting for transparency, allowing listeners to see how artificial intelligence has influenced a release and decide for themselves how to engage with it.
As the use of generative technology in music production continues to expand, industry insiders say platforms and artists alike are still exploring where the line should be between innovation, creativity and authenticity.
Ademide Adebayo