The Next Frontier Of Artificial Intelligence: Building Machines That Read Your Emotions

The blueprint for today’s artificial intelligence, neural network, and machine learning technology is the human brain, simply because it is the most effective problem-solving tool we know of.


However, there is a big part of the puzzle missing: the aspect of human intelligence we know as emotional intelligence, or empathy. Emotional intelligence is what allows us to take other people’s feelings and concerns into account in the decisions we make. And so far, progress here has been limited. Alexa, helpful as she may be in some circumstances, will not consider your feelings, or those of others affected by her actions, as she keeps your smart home running smoothly.

But all that could be about to change. In fact, it will have to change if AI is to reach its potential as a tool for assisting in our business and day-to-day lives. Recently, emotion-focused AI developer Affectiva became one of the few small businesses invited to join the Partnership on AI to Benefit People and Society. The interest of the “grand masters” of AI that make up the partnership (Google, Microsoft, Facebook and others) in Affectiva’s growing business is a sure sign that this overlooked aspect of AI is starting to get the attention it deserves.


Affectiva co-founder and CEO Rana el Kaliouby talked to me about her company’s work to develop what she calls “multi-modal emotion AI”. There may already be a growing understanding of how sentiment analysis can help machines understand how humans are feeling and adapt their behaviour accordingly. But a great deal of the valuable data that communicates our emotional state is lost if machines cannot also read our expressions, gestures, speech patterns, tone of voice and body language.

“There’s research showing that if you’re smiling and waving or shrugging your shoulders, that’s 55% of the value of what you’re saying – and then another 38% is in your tone of voice. Only seven percent is in the actual choice of words you’re saying, so if you think about it like that, in the existing sentiment analysis market which looks at keywords and works out which specific words are being used on Twitter, you’re only capturing 7% of how humans communicate emotion, and the rest is basically lost in cyberspace.”
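To make the “multi-modal” idea a little more concrete, here is a toy sketch of late fusion, where separate models score the face, the voice and the words, and the scores are blended into a single estimate. This is purely illustrative and not Affectiva’s actual method; the weights simply echo the percentages quoted above, and the emotion labels and example numbers are invented.

```python
# Toy sketch of multi-modal late fusion: each modality produces its own
# emotion-probability vector, and a weighted average combines them.
# Weights and labels are illustrative only, not Affectiva's method.
import numpy as np

EMOTIONS = ["joy", "surprise", "anger", "sadness", "neutral"]

def fuse(modality_scores: dict, weights: dict) -> np.ndarray:
    """Weighted average of per-modality emotion-probability vectors."""
    total = sum(weights.values())
    fused = sum(weights[m] * modality_scores[m] for m in modality_scores)
    return fused / total

face  = np.array([0.60, 0.20, 0.05, 0.05, 0.10])  # from a facial-expression model
voice = np.array([0.40, 0.10, 0.10, 0.10, 0.30])  # from a vocal-prosody model
text  = np.array([0.30, 0.20, 0.10, 0.20, 0.20])  # from text sentiment

fused = fuse({"face": face, "voice": voice, "text": text},
             {"face": 0.55, "voice": 0.38, "text": 0.07})
print(dict(zip(EMOTIONS, fused.round(3))))
```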

Affectiva’s Emotion AI technology is already in use by 1,400 brands worldwide, including CBS, MARS and Kellogg’s, many of which use it to judge the emotional effect of adverts by asking viewers to switch on their cameras while the video plays. The facial images are analysed by deep learning algorithms that accurately classify them according to the viewer’s feelings.
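For readers curious what frame-by-frame facial analysis might look like under the hood, below is a minimal sketch of a convolutional classifier over face crops. The model is untrained and the emotion labels are assumptions made for illustration; Affectiva’s production pipeline is proprietary and far more sophisticated.

```python
# Minimal, illustrative sketch of frame-level facial-expression classification.
# The network is untrained and the labels are assumed, not Affectiva's taxonomy.
import torch
import torch.nn as nn

EMOTIONS = ["joy", "surprise", "anger", "sadness", "fear", "disgust", "neutral"]

class EmotionCNN(nn.Module):
    """Toy convolutional classifier over 48x48 grayscale face crops."""
    def __init__(self, n_classes: int = len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

model = EmotionCNN().eval()
frame = torch.rand(1, 1, 48, 48)            # stand-in for a detected face crop
probs = torch.softmax(model(frame), dim=1)  # per-emotion probabilities
print(dict(zip(EMOTIONS, probs.squeeze().tolist())))
```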


Another application of the technology could prove important in ensuring that automated cars, particularly Level 2 and Level 3 autonomous vehicles, can safely assess the condition of the driver (for example, are they awake and sober?) before switching from automated to manual driving mode. “Emotional intelligence is how you understand yourself and the people around you, and it is just as important as cognitive, or rational, intelligence to how we make decisions,” el Kaliouby tells me.

“People who don’t have that often really struggle with operating effectively in a world where other people live.” So while it’s true that machines have been, by our own standards, largely sociopathic until now – unable to comprehend feelings and take them into account – that could be set to change.
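Returning to the automotive example above, one could imagine a driver-monitoring system gating the handover from automated to manual mode on a simple fitness check. The sketch below is hypothetical: the signal names and thresholds are invented for illustration and do not describe any production system.

```python
# Hypothetical sketch of a driver-state gate for a Level 2/3 handover request.
# Signal names and thresholds are invented; a real system would fuse camera,
# steering and other inputs under automotive safety standards.
from dataclasses import dataclass

@dataclass
class DriverState:
    eyes_open_ratio: float   # fraction of recent frames with eyes open
    drowsiness_score: float  # 0 = alert, 1 = asleep
    impairment_score: float  # 0 = sober, 1 = clearly impaired

def may_hand_over_control(state: DriverState) -> bool:
    """Return True only if the monitored driver looks fit to take over."""
    return (
        state.eyes_open_ratio > 0.9
        and state.drowsiness_score < 0.3
        and state.impairment_score < 0.2
    )

print(may_hand_over_control(DriverState(0.95, 0.1, 0.05)))  # True
print(may_hand_over_control(DriverState(0.60, 0.7, 0.10)))  # False
```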


“I’m a big believer that in the next three to five years, this is going to be ubiquitous, and our devices will have ‘emotion chips’ and a variety of sensors for the state of people around it – it’s a natural evolution from the mouse and keyboard we used to use, to touch interfaces, and most recently the way devices have become conversational. The next stage of that evolution is for devices to become perceptual.”

Taking a seat on the Partnership, of course, obliges Affectiva to deeply assess the moral and ethical implications of the technology it is developing, but el Kaliouby says that has always been a cornerstone of her company’s activity. “Privacy is a very big issue because emotions are obviously very personal data and you can easily see how it could be abused to manipulate people – for example, to manipulate voters. We have always been very definite that we are big on consent, and for everything we do, people have to opt in.”

The company has even gone out of its way to avoid working with personal data gained in non-consensual circumstances – for example in security or military applications, even though those fields clearly offer opportunities. “Bias is another issue … bias can be built into the data, and there are algorithms that detect not just your emotions but your age, gender, ethnicity – we are very thoughtful and careful about how we use that data.”
