Can your body’s reaction to music predict hit songs?




Why do some songs soar to the top of the charts while others plummet? New research suggests that what distinguishes hits lies in the listener’s brain, and that artificial intelligence can analyze physiological signals to uncover it. But other “hit song science” researchers aren’t ready to declare victory just yet.

Researchers at Claremont Graduate University used a wearable, smartwatch-like device to track the cardiac responses of people listening to music, then ran algorithms to transform these data into what they call proxies of neural activity, focusing on responses related to attention and emotion. A machine learning model trained on these data classified whether a song was a hit or a flop with 97 percent accuracy. The findings were published earlier this month in Frontiers in Artificial Intelligence.
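The paper’s features and code are not public, but the basic setup it describes, a classifier trained on listener-response features to label each song a hit or a flop, can be sketched in miniature. Everything below (the two feature names, the synthetic data, the plain logistic-regression model) is illustrative, not the study’s actual pipeline:

```python
# Illustrative sketch only: a tiny logistic-regression classifier trained on
# invented per-song listener-response features. The real study's features and
# model architecture are not public.
import math
import random

def train_logistic(features, labels, lr=0.5, epochs=2000):
    """Fit logistic regression with plain stochastic gradient descent."""
    n = len(features[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of "hit"
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Return 1 (hit) or 0 (flop) for one song's feature vector."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Synthetic data: each song summarized by two hypothetical listener-response
# features, e.g. mean "immersion" and peak "emotional resonance".
random.seed(0)
hits  = [[random.gauss(0.7, 0.05), random.gauss(0.8, 0.05)] for _ in range(20)]
flops = [[random.gauss(0.4, 0.05), random.gauss(0.5, 0.05)] for _ in range(20)]
X = hits + flops
y = [1] * 20 + [0] * 20

w, b = train_logistic(X, y)
accuracy = sum(predict(w, b, x) == yi for x, yi in zip(X, y)) / len(y)
print(f"training accuracy: {accuracy:.2f}")
```

Note that the sketch reports *training* accuracy on well-separated synthetic data; the study’s 97 percent figure would only be meaningful if measured on songs the model had not seen, which is exactly the replication question critics raise below.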

The research is the latest, and possibly most successful, attempt to solve the decades-old “hit song science” problem: whether automated methods such as machine learning software can predict if a song will become a hit before it is released. Proponents have suggested that such technology could reduce music production costs, curate public playlists and even render TV talent show judges obsolete. The new model’s near-perfect accuracy in predicting song popularity raises the intriguing possibility of transforming the creative process for artists and the distribution process for streaming services. But the research also raises concerns about the reliability and ethical implications of merging artificial intelligence and brain data.


“This research could be groundbreaking, but only if it can be replicated and generalized. There are a lot of biases that can come into play,” says Hoda Khalil, a data scientist at Carleton University in Ontario. “And even if we had enough statistical evidence to generalize, we still have to consider how this model could be abused. We can’t let the technology get ahead of the ethical considerations.”

So far, determining what qualities make a song popular has been more alchemy than science. Music industry professionals have traditionally mined large databases to analyze the lyrical and sonic attributes of hit songs, such as tempo, explicitness and danceability. But these methods have predicted success only marginally better than a coin toss.

In 2011 machine learning researchers at the University of Bristol in England developed a “hit potential equation” that analyzed 23 characteristics of a song to gauge its popularity; the equation classified hits with 60 percent accuracy. Khalil and her colleagues, meanwhile, analyzed data from more than 600,000 songs and found no significant correlation between various sonic characteristics and the number of weeks a song remained on the Billboard Hot 100 or Spotify Top 50 charts. Even the claims of entrepreneur Mike McCready, who coined the term “hit song science,” were scrutinized by researchers who determined that the evidence available at the time could not support them.
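The kind of analysis Khalil describes can be pictured as computing, for each sonic attribute in a catalog, its correlation with chart longevity. A minimal sketch with invented numbers (her team’s actual dataset and feature definitions are not reproduced here):

```python
# Sketch of correlating one sonic feature (an invented "danceability" score)
# with weeks spent on a chart. The data are made up; the point is only to
# show the computation. On real catalog data, Khalil's team reported no
# significant correlations of this kind.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

danceability   = [0.81, 0.42, 0.65, 0.90, 0.33, 0.57, 0.74, 0.49]
weeks_on_chart = [12, 3, 25, 1, 18, 7, 2, 30]  # no designed relationship

r = pearson_r(danceability, weeks_on_chart)
print(f"r = {r:+.3f}")
```

A weak r on a handful of songs proves nothing either way, which is why studies like Khalil’s run the test across hundreds of thousands of tracks before concluding the features carry little predictive signal.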


The new approach takes a different tack, says Paul Zak, a neuroeconomist at Claremont Graduate University and the lead author of the new study. Rather than focusing on the songs themselves, his team sought to explore how humans respond to them. “The connection just seemed too simple. Songs are designed to create emotional experiences in people, and those emotions come from the brain,” Zak says.

He and his team equipped 33 people with wearable cardiac sensors, which use light waves that penetrate the skin to monitor changes in blood flow, the same way conventional smartwatches and fitness trackers detect heart rate. Participants listened to 24 songs, ranging from Tones and I’s megahit “Dance Monkey” to NLE Choppa’s commercial flop “Dekario (Pain).” The participants’ heart rate data were then fed through a commercial platform from Immersion Neuroscience, which the researchers say algorithmically converts them into a measure of combined attention and emotional resonance called immersion. According to the researchers, these immersion signals predicted hit songs with moderate accuracy even without machine learning analysis, with hit songs eliciting higher immersion. In contrast, participants’ subjective rankings of how much they enjoyed a song did not reflect the songs’ eventual popularity with the public.

Zak, who co-founded Immersion Neuroscience and serves as its chief immersion officer, explains the rationale for using cardiac data as a stand-in for neural responses: a strong emotional response prompts the brain to synthesize oxytocin, a “feel good” neurochemical, and increases activity in the vagus nerve, which connects the brain, gut and heart, he says.
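Immersion Neuroscience’s algorithm is proprietary, so the exact computation is unknown. But vagal activity of the kind Zak describes is conventionally estimated from heart rate variability; one standard statistic is RMSSD, the root mean square of successive differences between heartbeats. A minimal illustration with made-up beat intervals:

```python
# Illustrative only: the company's "immersion" metric is proprietary.
# RMSSD is a standard heart-rate-variability statistic often used as a
# rough proxy for vagal (parasympathetic) activity.
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between beat intervals."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Invented beat-to-beat intervals (milliseconds), as an optical wrist
# sensor might report them
steady   = [800, 805, 798, 802, 801, 799]  # low variability
variable = [800, 760, 850, 780, 840, 770]  # higher variability

print(f"steady:   RMSSD = {rmssd(steady):.1f} ms")
print(f"variable: RMSSD = {rmssd(variable):.1f} ms")
```

Whether any wrist-worn measure like this genuinely tracks neural “immersion,” rather than generic arousal, is precisely the validation question Koelsch raises below.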


Not everyone agrees. “This study rests on a neurophysiological measure called immersion, which requires further scientific validation,” says Stefan Koelsch, a neuroscientist at the University of Bergen in Norway and a visiting researcher at the Max Planck Institute for Human Cognitive and Brain Sciences in Leipzig, Germany. Koelsch also notes that several of the papers the study cites to support the legitimacy of “immersion” were co-authored by Zak, and that not all of them were published in peer-reviewed journals.

This is not the first time scientists have used brain signals to predict a song’s popularity. In 2011 researchers at Emory University used functional magnetic resonance imaging (fMRI), which measures brain activity by detecting changes in blood flow, to predict songs’ commercial success or failure. They found that a weak response in the nucleus accumbens, a brain region that helps process motivation and reward, accurately flagged 90 percent of songs that went on to sell fewer than 20,000 copies. But while the method was good at identifying less successful music, it predicted hits only about 30 percent of the time.

Quite apart from their low predictive power, fMRI approaches are somewhat impractical. A typical fMRI session lasts at least 45 minutes and requires participants to endure being confined in a cold, sterile room, which some find claustrophobic. So if a portable, lightweight smartwatch could genuinely measure an individual’s neural activity, it could revolutionize the way researchers approach hit song science.


The result may also be too good to be true, says Koelsch. Based on his previous research on musical enjoyment and brain activity, he is skeptical not only of immersion but of the very idea that machine learning models can capture the complex nuances that make a song a hit. In 2019, for example, Koelsch and his colleagues conducted a study on musical enjoyment that used machine learning to determine how predictable the chords of certain songs were, and fMRI scans to examine how participants’ brains responded to those songs. Although that study revealed a relationship between predictability and emotional response, Koelsch has been unable to replicate the findings since. “It’s very difficult to find reliable indicators of even the roughest difference between pleasant and unpleasant music, let alone the subtle differences that make good music a hit,” he says. “That’s why I’m skeptical.” Zak had not responded to a request for comment on criticisms of his recent study by the time of publication.

If these results can be replicated, however, the new model may have immense commercial potential. For Zak, its primary use is not necessarily in creating new songs but in efficiently sorting through the enormous number of existing ones. The research began, he says, when a music streaming service approached his group: the company’s team was overwhelmed by the tens of thousands of new tracks released each day and wanted to identify the ones that would truly resonate with listeners without manually combing through each one.

With the new model, “we can deliver appropriate entertainment to viewers based on their neurophysiology,” Zak said in a press release about the study. “Instead of being offered hundreds of choices, getting just two or three options would make it easier and faster to choose music that you will enjoy.” He envisions this as an opt-in service in which data would be anonymized and shared only if a user signs a consent form.


“As wearable devices become cheaper and more common, this technology can passively monitor your brain activity and recommend music, movies and TV shows based on those data,” Zak says. “Who wouldn’t want that?”

But even if this approach works, the prospect of combining mind-reading and machine learning to predict hit songs still poses an ethical dilemma. “If we could train a machine learning model to understand how different types of music affect brain activity, could it be easily exploited to manipulate people’s emotions?” Khalil says. She points out that relying solely on the opt-in approach of such services often fails to protect users from invasion of privacy. “Many users just accept the terms and conditions without reading them,” says Khalil. “This increases the chances of data being unintentionally shared and misused.”

Our favorite songs may not seem like intimate personal data, but they can be a window into someone’s moods, preferences, and habits. And when those details are combined with personalized data about brain activity, you have to consider how much information you’re willing to give up to create the perfect playlist.



