Why AI doesn't really understand how you feel (and why it matters)



AI is undeniably changing our lives, but there is still a lot of hype surrounding it. One of the most popular narratives today is that machines are learning to understand human emotions and feelings.

This is the realm of affective computing: the field of AI research and development concerned with interpreting, simulating, and predicting human emotions and feelings.

The idea is that emotionally aware AI leads to more convenient, accessible, and safer applications.

But can machines really understand emotions? After all, they can't actually feel; they can only analyze, estimate, and imitate, based on limited, often superficial models of human behavior.

This hasn't stopped companies from pouring billions into building tools and systems designed to recognize our emotions, respond empathetically, and even make us fall in love with them.

So, when we talk about emotional AI, what are we really talking about? With therapy and dating emerging as top use cases for generative AI, can we trust systems that cannot actually experience emotions to handle our emotions responsibly?

Or is the whole idea a concept concocted by marketers keen to promote the "final frontier" of human-like machines? Let's take a look.

Understanding artificial emotional intelligence

First of all, what do emotions mean in relation to machines? The simple answer is that, to a machine, emotions are nothing more than another form of data.

Affective computing focuses on detecting, interpreting, and responding to data about human emotional states. This data can be collected from audio recordings, image recognition algorithms trained on facial data, analysis of written text, or even the way you move your mouse and click while shopping online.

It can also include biometric data such as heart rate, skin temperature, and electrodermal activity.

Emotional AI tools analyze patterns in this data and use them to interpret or simulate emotional interactions with us. Examples range from customer service bots that detect frustration and respond accordingly, to in-vehicle systems that monitor a driver's mental state. A minimal sketch of the text-analysis piece is shown below.
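
To make this concrete, here is a minimal sketch of the text side of emotion detection, using the Hugging Face transformers library's off-the-shelf sentiment pipeline. The sample messages are invented for illustration; real emotional AI products layer many more signals and far more modeling on top of classifiers like this.

```python
# A minimal sketch of text-based emotion detection using the Hugging Face
# `transformers` sentiment-analysis pipeline. The sample messages are
# invented; production systems combine many more signals than text alone.
from transformers import pipeline

# Loads a default pretrained sentiment classifier on first use.
classifier = pipeline("sentiment-analysis")

messages = [
    "This is the third time I've had to explain my problem!",
    "Thanks, that fixed it straight away.",
]

for message in messages:
    result = classifier(message)[0]
    # The model returns a label and a confidence score, not a feeling:
    # roughly NEGATIVE for the first message and POSITIVE for the second.
    print(f"{result['label']} ({result['score']:.2f}): {message}")
```

Note that the classifier outputs coarse positive/negative labels with confidence scores; mapping those onto the rich emotional states discussed here is exactly where the marketing tends to outrun the technology.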

However, emotions are complex and highly open to interpretation (varying across geographies and cultures), so it is very important that they are not misread.

The more data an emotional AI application has, the more closely it can simulate human emotions and the more accurately it can predict and respond to emotional needs.

But this is still not enough for a machine to truly "feel". The limitation isn't processing power; in fact, research suggests that machines process data far more rapidly than the brain.

Instead, it is the far greater complexity of our brains, compared with even the most sophisticated artificial neural networks and machine learning models, that allows us to truly feel and empathize.

Emotional AI Ethics

This raises some important ethical questions. If machines don't truly understand us, is it right for them to make decisions that could affect our lives?

For example, a system might be designed to make us feel cautious or even scared in order to warn us away from doing something dangerous. But how do we know it won't frighten us out of all proportion to the threat, in ways that cause us trauma and distress?

And do chatbots and AIs designed to act as virtual girlfriends, partners, or lovers understand the implications of inciting or manipulating human emotions such as love, jealousy, or sexual attraction?

Exaggerating a machine's ability to understand our emotions also poses risks to which we must give serious thought.

For example, if people believe that an AI understands or empathizes with them more than it actually does, they cannot make a fully informed decision about how far to trust it.

This could be considered a form of manipulation, particularly when the true purpose of the AI is not to support users but to drive spending, engagement, or influence.

Risks and rewards

Emotional AI development is big business, as it is seen as a way to provide more personalized and engaging experiences and to predict and influence our behavior.

Tools like IMENTIV have been used in recruitment and training to better understand how candidates respond to stressful situations, and cameras have been used in the São Paulo subway to gauge passengers' emotional responses to advertising.

In one controversial use case, the UK railway operator Network Rail reportedly sent passenger video data to Amazon's emotion analysis service without collecting consent.
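
Network Rail's exact pipeline hasn't been published, but to give a sense of what such services expose, here is a hedged sketch of querying Amazon Rekognition's face analysis API via boto3. The bucket name, image key, and region are placeholder assumptions, not details from the reported case.

```python
# A sketch of the kind of emotion estimates cloud vision services expose,
# using Amazon Rekognition's DetectFaces API via boto3. The bucket, key,
# and region are placeholders; this is not Network Rail's actual pipeline.
import boto3

rekognition = boto3.client("rekognition", region_name="eu-west-2")

response = rekognition.detect_faces(
    Image={"S3Object": {"Bucket": "example-bucket", "Name": "frame-0001.jpg"}},
    Attributes=["ALL"],  # "ALL" includes the Emotions attribute
)

for face in response["FaceDetails"]:
    # Emotions come back as confidence scores across predefined labels
    # (HAPPY, SAD, ANGRY, ...), not as facts about anyone's inner state.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Most likely emotion: {top['Type']} ({top['Confidence']:.1f}%)")
```

The API returns confidence scores over a fixed set of emotion labels; treating those scores as ground truth about a passenger's feelings is precisely the leap this article questions.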

This growing potential for invasions of privacy has led lawmakers in some jurisdictions to take action. For example, the European Union's AI Act prohibits the use of AI to infer emotions in workplaces and schools.

One reason for this is the risk of bias. It has already been shown that machines' ability to accurately detect emotional responses varies with race, age, and gender. Cultural differences matter too: in Japan, for example, smiles are more often used to mask negative emotions than in other parts of the world.

This opens up the possibility that AI could drive new forms of discrimination, a threat that must be understood and prevented.

Getting emotional

In conclusion, it is clear that AI cannot truly "feel", but dismissing its ability to interpret our emotions would be a serious mistake.

The very idea of letting machines read our minds by interpreting our emotional responses rings alarm bells for many, and it clearly creates dangerous opportunities for abuse by bad actors.

At the same time, affective computing could hold the key to unlocking therapies that help people, and to improving the efficiency, convenience, and safety of the services they use.

It's up to us, whether as developers, regulators, or simply users of AI, to ensure that these new technical capabilities are integrated into society responsibly.




