
Emotion AI: How Algorithms Read Human Emotions

Human emotions are complex, vivid, enriching, delightful, and the bedrock of social interactions. 

What are humans without a rich palette of emotions? Machines! Robots!

Emotions take center stage in humanity’s existence, so why should machines be deprived?

 

Tune into Emotion AI

Emotion Artificial Intelligence, also called affective AI or affective computing, is a field of computer science that helps machines understand human emotions. It is a subset of artificial intelligence (the broad term for machines replicating how humans think) that measures, interprets, simulates, and responds to human emotions.

The field was pioneered at the MIT Media Lab, and the idea is to help machines develop empathy. Empathy is a complex concept with many facets, but at a basic level it means understanding another person’s emotional state. In theory, if machines can reach that level of understanding, they can serve us better.

Emotion AI, as a tool, allows for much more natural interaction between humans and machines.

Machines are very good at analyzing large amounts of data. They can analyze images and pick up subtle micro-expressions on human faces that pass too quickly for a person to notice.

 

Emotion Reading Algorithms

In the context of data and machine learning, the concept of ‘reading emotions’ falls under the umbrella of cognitive science.

What does a cognitive emotion detection algorithm accomplish? It mimics the human thought process: trained on data in the form of images and videos of people, it identifies and labels the emotions present in that data.

The algorithm consists of three stages: image processing, facial feature extraction, and emotion detection. The image processing stage isolates the face and its components using a fuzzy color filter, a virtual face model, and histogram analysis. The facial feature extraction stage reduces those components to a compact set of features. The emotion detection stage recognizes the emotion from the extracted features with a fuzzy classifier.
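To make the structure concrete, here is a minimal Python skeleton of that three-stage pipeline. The function names (extract_face, extract_features, classify_emotion) are illustrative placeholders, not part of any specific library, and the stage bodies are deliberately left unimplemented:

```python
import numpy as np

def extract_face(image: np.ndarray) -> np.ndarray:
    """Stage 1 (image processing): locate the face and facial components,
    e.g. via color filtering and histogram analysis, and return the crop."""
    ...

def extract_features(face: np.ndarray) -> np.ndarray:
    """Stage 2 (feature extraction): reduce the cropped face to a compact
    feature vector describing eyes, brows, mouth, and so on."""
    ...

def classify_emotion(features: np.ndarray) -> str:
    """Stage 3 (emotion detection): map the feature vector to an emotion
    label, for instance with a fuzzy or probabilistic classifier."""
    ...

def detect_emotion(image: np.ndarray) -> str:
    """Run the three stages end to end on a single image."""
    face = extract_face(image)
    features = extract_features(face)
    return classify_emotion(features)
```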

Cognitive science studies the human brain and understands its principles of intelligence. The expectation is that by building computer systems from the knowledge of human intelligence, machines will be able to mimic learning and develop intelligent behavior patterns like humans.

It is worthwhile to understand ‘Sentiment Analysis’ to get a holistic view of emotion reading algorithms.

Sentiment analysis, also called opinion mining, is the study of the affective states a person expresses. Natural language processing, computational linguistics, text mining, and biometric analysis make sentiment analysis possible.

The primary task of a sentiment analysis program is to isolate the polarity of the input (text, speech, facial expression, etc.) and understand whether the underlying sentiment presented is positive, negative, or neutral. Based on the initial analysis, programs dig deeper into identifying emotions like enjoyment, happiness, disgust, anger, fear, and surprise.
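As a concrete illustration of polarity detection on text, here is a minimal sketch using NLTK’s VADER analyzer, one common option among many. It assumes nltk is installed and the vader_lexicon resource has been downloaded; the threshold values are the conventional ones, not the only possible choice:

```python
from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')

sia = SentimentIntensityAnalyzer()
scores = sia.polarity_scores("The new update is wonderful, I love it!")
# scores is a dict of the form {'neg': ..., 'neu': ..., 'pos': ..., 'compound': ...}

# Conventional thresholds on the compound score give a coarse polarity label.
if scores["compound"] >= 0.05:
    polarity = "positive"
elif scores["compound"] <= -0.05:
    polarity = "negative"
else:
    polarity = "neutral"

print(polarity, scores)
```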

The Face Emotion Recognizer (known as FER) is an open-source Python library used for sentiment analysis of images and videos. For each face it detects, it reports a confidence score per emotion.
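A minimal usage sketch follows, assuming the fer package and OpenCV are installed (pip install fer opencv-python) and that face.jpg is a local image containing a face; the scores in the comments are illustrative, not real output:

```python
import cv2
from fer import FER

image = cv2.imread("face.jpg")      # load the image as an OpenCV (BGR) array
detector = FER(mtcnn=True)          # use the MTCNN face detector (optional)

results = detector.detect_emotions(image)
# Shape of the output (values are illustrative):
# [{'box': [x, y, w, h],
#   'emotions': {'angry': 0.02, 'disgust': 0.00, 'fear': 0.05, 'happy': 0.84,
#                'sad': 0.03, 'surprise': 0.04, 'neutral': 0.02}}]

emotion, score = detector.top_emotion(image)   # highest-scoring emotion
print(emotion, score)                          # e.g. happy 0.84
```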

 

A breakthrough or a challenge?

When AI gets involved with human emotion, a lot of alarms are understandably raised. The fear is that if machines could read emotions, they might gain sentience and potentially manipulate our emotions. It is a valid concern.

Any business interested in applying this technology should promote a healthy discussion of its benefits and of what it can realistically do.

As these technologies roll out, they need to work well for all people, not just for the subset of the population represented in the training data.

For instance, recognizing emotions on an African American face can be difficult for a model trained mostly on Caucasian faces.

Will the technology work for humanity’s benefit or detriment? Time will tell.

 
