Increasingly, machines are talking back. And big tech groups like Microsoft Corp. and Alphabet Inc. are grabbing their piece of the pie and running with it.  For example, Microsoft's Xiaoice, Chinese for "little Bing," has already held phone conversations with more than 600,000 people since its launch. The company recently acquired Berkeley-based Semantic Machines, which uses machine learning to capture contextual information from human-chatbot conversations and apply it to future dialogue.  Alphabet Inc.'s Google has a personal assistant that can make calls and carry on task-related conversations, such as scheduling appointments.

An increase in data for computers to learn from, improved processing power, and advances in machine learning have led us to the next generation of AI.  These machines are increasingly competitive at tasks long perceived as uniquely human: recognizing faces, turning face sketches into photos, recognizing speech and playing Go.

AI may far surpass our organic computing power, but there is more to understanding and responding appropriately in conversation than a series of algorithms.  Humans "read" the expressions, posture, tone, and inflections of the speaker, combine those cues with the words spoken, and arrive at an emotional understanding.  A good example is the six basic facial expressions that appear to be universal across cultures: anger, sadness, fear, disgust, surprise and happiness.  There is a very real difference between a "Sure!" said with a smile because someone offers you a cupcake and a "Sure!" said with a frown because you have just been told you need to stay overtime on a Friday.  But what about sarcasm? Flirtation?  What "reads" as a good response in one culture may be a bad one in another, so how do we account for that?
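For illustration only, here is a minimal Python sketch of that idea: the spoken words and a facial-expression label are combined so that the same "Sure!" resolves to different emotional readings. The labels, rules, and function names are hypothetical, not any real system's API; a production system would rely on trained models rather than hand-written rules.

```python
from dataclasses import dataclass

# The six basic expressions referenced above (Ekman's set).
BASIC_EXPRESSIONS = {"anger", "sadness", "fear", "disgust", "surprise", "happiness"}

@dataclass
class Utterance:
    text: str        # the words that were spoken
    expression: str  # a facial-expression label, e.g. from a vision model

def infer_tone(u: Utterance) -> str:
    """Combine the literal words with the speaker's expression,
    since the same words can carry opposite meanings."""
    if u.expression not in BASIC_EXPRESSIONS:
        return "unknown expression"
    agreeable_words = u.text.rstrip("!?.").strip().lower() in {"sure", "fine", "okay"}
    if agreeable_words and u.expression == "happiness":
        return "genuinely positive"
    if agreeable_words and u.expression in {"anger", "sadness", "disgust"}:
        return "reluctant or sarcastic"  # words and face disagree
    return "unclear, more context needed"

# "Sure!" with a smile vs. "Sure!" with a frown:
print(infer_tone(Utterance("Sure!", "happiness")))  # genuinely positive
print(infer_tone(Utterance("Sure!", "anger")))      # reluctant or sarcastic
```

Even this toy example shows why sarcasm and cultural context are hard: the rules only work when the non-verbal signal is unambiguous, which in real conversation it rarely is.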

Microsoft's executive vice president of AI and research, Harry Shum, calls "full duplex" technology capable of engaging in "human-like verbal conversations" the next step in a world transformed by AI. "We need agents and bots to balance the smarts of IQ with EQ – our emotional intelligence," he writes in a recent blog post. An emotionally intelligent AI has several potential benefits, whether giving someone a companion or helping us perform certain tasks, ranging from criminal interrogation to talking therapy.  But there are also ethical problems and risks involved.  Will we deem it ethical to leave an elderly person with dementia in the care of an AI companion?

Without the ability to experience emotions, it is unlikely AI will ever reach the iconic repertoire of HAL 9000 in Arthur C. Clarke's Space Odyssey series.  But still, technology presses onward in its quest to mimic human beings and master the art of conversation.

