Computers have emotions too: New study shows AI can learn to recognize emotions with 98% accuracy

Applications of AI


The rise of artificial intelligence is one of the world’s most influential and talked-about technological advances. Its rapidly expanding capabilities have made it part of our daily lives: it now sits in our living rooms and, some say, threatens our jobs.

AI allows machines to operate with some degree of human-like intelligence, but one area where humans have always outperformed machines is the ability to recognize emotion in context. What if machines and technologies could be made to recognize emotions automatically?

New research by Brunel University London, the University of Bonab in Iran and Islamic Azad University combines EEG, a test that measures the electrical activity of the brain, with artificial intelligence to develop a computer model that automatically recognizes and classifies emotions with over 98% accuracy.

By feeding computers data and training algorithms on it, they can be taught to process information in a way that resembles the human brain. This branch of artificial intelligence and computer science is called machine learning: computers are taught to mimic the way humans learn.
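As a purely illustrative sketch of this kind of learning from examples (not the study's actual pipeline), the code below trains a simple logistic-regression classifier to separate "positive" from "negative" trials. The one-number-per-trial features are synthetic stand-ins invented for this example, not real EEG data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for an EEG-derived feature (one number per trial),
# drawn higher on "positive emotion" trials. Entirely synthetic.
pos = rng.normal(1.0, 0.5, 200)   # trials labelled positive
neg = rng.normal(-1.0, 0.5, 200)  # trials labelled negative
X = np.concatenate([pos, neg])
y = np.concatenate([np.ones(200), np.zeros(200)])

# Logistic regression trained by gradient descent: the computer "learns"
# a decision rule from labelled examples rather than being programmed with one.
w, b = 0.0, 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(w * X + b)))   # predicted probability of "positive"
    w -= 0.1 * ((p - y) * X).mean()      # gradient step on the weight
    b -= 0.1 * (p - y).mean()            # gradient step on the bias

pred = (1 / (1 + np.exp(-(w * X + b)))) > 0.5
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

On these well-separated toy classes the learned rule classifies most trials correctly; real EEG features are far noisier and higher-dimensional, which is where the study's more sophisticated model comes in.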

Dr. Severan Danishvar, Research Fellow at Brunel, said: “Generative Adversarial Networks, known as GANs, are important algorithms used in machine learning that allow computers to mimic the workings of the human brain. Since EEG signals come directly from the central nervous system, they are strongly associated with a wide range of emotions.

“Through the use of GANs, a computer learns how to perform a task after seeing examples and training data. It can then improve accuracy over time by creating new data.”
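The adversarial training Dr. Danishvar describes can be sketched as a toy loop: a generator proposes synthetic samples, a discriminator scores them against real ones, and each improves against the other. The numpy sketch below is a minimal illustration on a 1-D synthetic stand-in for EEG-derived values; the study's actual architecture and data are not public in this article, so everything here is an assumption for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy "real" signals: samples from N(4, 1), a hypothetical stand-in
# for a feature extracted from real EEG recordings.
def sample_real(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: maps 1-D noise to a 1-D "signal" via one linear unit.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
# Discriminator: one logistic unit scoring real vs. generated samples.
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

lr, batch = 0.05, 32
for step in range(2000):
    z = rng.normal(size=(batch, 1))    # noise input
    fake = z @ g_w + g_b               # generated samples
    real = sample_real(batch)

    # Discriminator update: push D(real) toward 1, D(fake) toward 0.
    for x, target in ((real, 1.0), (fake, 0.0)):
        p = sigmoid(x @ d_w + d_b)
        grad = p - target              # dLoss/dlogit for binary cross-entropy
        d_w -= lr * x.T @ grad / batch
        d_b -= lr * grad.mean(axis=0)

    # Generator update: push D(fake) toward 1 (fool the discriminator).
    z = rng.normal(size=(batch, 1))
    fake = z @ g_w + g_b
    grad_logit = sigmoid(fake @ d_w + d_b) - 1.0
    grad_fake = grad_logit @ d_w.T     # backpropagate through the discriminator
    g_w -= lr * z.T @ grad_fake / batch
    g_b -= lr * grad_fake.mean(axis=0)

synthetic = rng.normal(size=(500, 1)) @ g_w + g_b
print(round(float(synthetic.mean()), 1))  # mean of generated samples; should drift toward the real mean (~4)
```

The key idea, as in the study, is that once trained the generator produces new data resembling the real distribution, which can augment scarce EEG recordings.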

The study, published in the journal Electronics, used music to stimulate emotions in 11 volunteers aged between 18 and 32.

Participants were instructed to abstain from alcohol, drugs, caffeine and energy drinks for 48 hours prior to the experiment, and none had depressive disorders.

During the study, all volunteers listened to 10 songs on headphones. Pleasant music was used to induce positive emotions and sad music was used to induce negative emotions.

While listening to the music, participants were connected to an electroencephalograph, which recorded the EEG signals used to recognize their emotions.

In preparation for the study, the researchers trained the GAN algorithm on an existing database of EEG signals. The database contains data on emotions evoked by musical stimuli and served as a model for real EEG signals.

As expected, the music evoked positive or negative emotions depending on the piece played, and there was a high similarity between the real EEG signals and those modeled by the GAN algorithm, indicating that the GAN was effective at generating realistic EEG data.

Dr. Danishvar said: “The results show that the proposed method achieves 98.2% accuracy in distinguishing between positive and negative emotions. Compared with previous studies, the proposed model performs well and could be used in future brain-computer interface applications, including enabling robots to identify human emotional states and interact with people accordingly.

“For example, robotic devices could be used in hospitals to energize and mentally prepare patients before major surgery.

“Future research should investigate additional emotional responses in GANs, such as anger and disgust, to make the model and its applications even more useful.”

Media contact:

Nadine Palmer, media representative

+44 (0)1895 267090
nadine.palmer@brunel.ac.uk



