Chinese technology company Huawei is working on a new artificial-intelligence-powered voice assistant that will be more aware of emotions and more interactive than current products on the market, according to the company.
While existing voice assistants like the Google Assistant, Amazon’s Alexa, and Apple’s Siri have varying capabilities, they all serve a largely functional role: informing the user or taking directions. Huawei launched a voice assistant for the Chinese market back in 2013, and claims the service now has 110 million daily users. But Felix Zhang, VP of software engineering at Huawei’s consumer business group, told CNBC that the company now wants its AI assistant to offer emotional interactions.
“We think that, in the future, all our end users wish [that] they can interact with the system in the emotional mode,” Zhang said. “This is the direction we see in the long run.” The concept is for the technology to be able to sense a user’s emotions and respond accordingly. This could mean that the AI detects the type of language used, or the user’s tone of voice. Ultimately, Huawei wants the interaction between a human and the company’s AI products to be a genuine conversation that can be sustained for as long as possible.
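As a purely illustrative sketch of the text side of this idea (Huawei has not published how its system works, and a production assistant would use trained models over both text and acoustic features), a toy keyword-based emotion detector might look like this; the keyword lists, emotion labels, and reply strings are all invented for the example:

```python
import re

# Invented keyword lists; a real system would learn these from data,
# and would also analyse tone of voice, not just wording.
EMOTION_KEYWORDS = {
    "frustrated": {"annoying", "useless", "again", "stop"},
    "happy": {"great", "thanks", "love", "awesome"},
    "sad": {"tired", "lonely", "miss", "sorry"},
}

def detect_emotion(utterance: str) -> str:
    """Return the emotion whose keywords best match the utterance."""
    words = re.findall(r"[a-z]+", utterance.lower())
    scores = {
        emotion: sum(word in keywords for word in words)
        for emotion, keywords in EMOTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "neutral"

def respond(utterance: str) -> str:
    """Adapt the assistant's reply to the detected emotion."""
    replies = {
        "frustrated": "Sorry about that. Let me try a different way.",
        "happy": "Glad to hear it! Anything else?",
        "sad": "I'm here if you want to talk.",
        "neutral": "How can I help?",
    }
    return replies[detect_emotion(utterance)]
```

Even this crude approach hints at the design question Huawei faces: detecting an emotion is only the first step, and the harder problem is choosing a response that sustains the conversation rather than ending it.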
“The first step is [to] give your assistant a high IQ,” said James Lu, director of AI product management at Huawei’s consumer business group, adding that the next step will be to give it a “high EQ — emotional quotient”.
Internet of Business says
The concept of emotional and emotion-sensing machines is a complex one. Emotion-sensing robots, such as SoftBank’s Pepper, have been commercially popular in Japan, but have little sophistication in emotional terms and support comparatively few apps – fewer than earlier versions of SoftBank’s/Aldebaran’s NAO robots, for example.
The concept also demands that development teams themselves have a sophisticated understanding of human emotions – something that MIT Media Lab’s Joichi Ito suggested might be a problem when he spoke at last year’s World Economic Forum in Davos. Taking part in a forum on AI ethics with IBM’s Virginia Rometty and Microsoft’s Satya Nadella, Ito called some of his own AI-app-developing students “oddballs” and suggested that they tend to prefer the binary world of computers to the messy, emotional world of human beings.
Moreover, he suggested that the closed, and usually male-dominated, environment of programmers was a further challenge in the development of AI systems that need to interact with human beings ethically, fairly, sensitively, and responsively. The context for his remarks was his revelation that MIT’s researchers had inadvertently designed a facial recognition system that was unable to recognise an African American woman, because it had not been trained to do so by the team of white male coders. Whether these same general challenges apply in China to the same extent is hard to judge. But one thing should be clear: AI is deeply rooted in human society, and is not some separate and distinct layer, or a form of blank machine intelligence.
The underlying point is that AI has a cultural dimension, because most AIs are trained within a specific culture or context. Any distinct cultural differences between, say, China or Japan and the UK or US would be significant in this regard, because people express feelings and emotions differently, and in different circumstances. Any AI system that is trained amongst a billion people within China may have to be significantly adapted for other markets, and that would mean training it in a global context – just as any emotion-sensing product developed in any part of the world would need a broad and culturally diverse palette of training data to draw from.