Did Google's Duplex AI Just Beat the Turing Test?

Thursday, May 10, 2018

Google recently announced a new capability for the Google Assistant that might be available in the not-too-distant future: the ability to make phone calls on your behalf. CEO Sundar Pichai played recordings of actual phone calls that he said were placed by the Assistant to a hair salon and to a restaurant. This was not the familiar synthetic-sounding Google Assistant voice, either. The voice sounded natural and human, and the person on the other end had no idea they were talking to a digital AI helper. By many acceptance standards, Turing's famous test was passed (albeit in a limited domain).

A long-time goal of human-computer interaction, and an expectation established in science fiction, has been to enable people to have natural conversations with computers. With Google CEO Sundar Pichai's recent demonstration at I/O 2018 of new artificial intelligence updates for the Google Assistant, this goal seems incredibly close.

“The amazing thing is that Assistant can actually understand the nuances of conversation,” he said. “We’ve been working on this technology for many years. It’s called Google Duplex.”

The demonstration audio really needs to be heard to be appreciated. The Assistant voice is incredibly natural, using breath sounds and speech disfluencies like "umm" and "hmm"; it does not sound like it is coming from a computer. Moreover, in the conversations the system seems to handle notoriously difficult features of human conversation, such as interruptions, unclear speech, and even accents.

With Google Duplex, the potential for next-level human-computer interaction is upon us. The technology is limited in domain so far, directed toward completing specific tasks such as scheduling certain types of appointments. For such tasks, Duplex makes the conversational experience as natural as possible, allowing people to speak normally, as they would to another person, without having to adapt to a machine.

"One of the key research insights was to constrain Duplex to closed domains, which are narrow enough to explore extensively. Duplex can only carry out natural conversations after being deeply trained in such domains. It cannot carry out general conversations," the researchers state in a blog post.

The Google Duplex technology is built to sound natural, to make the conversation experience comfortable, according to Google. "It’s important to us that users and businesses have a good experience with this service, and transparency is a key part of that. We want to be clear about the intent of the call so businesses understand the context. We’ll be experimenting with the right approach over the coming months."

Duplex is based on a recurrent neural network (RNN) built using TensorFlow Extended (TFX). To obtain its high precision, the team trained Duplex's RNN on a corpus of anonymized phone conversation data.
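Google has not published the details of Duplex's architecture, but the core idea of a recurrent network consuming one conversation feature vector per time step can be sketched in plain Python with NumPy. Everything below (dimensions, weights, feature contents) is an illustrative assumption, not Google's actual model:

```python
import numpy as np

# Illustrative-only sketch of a vanilla RNN, loosely in the spirit of a
# network that consumes one conversation feature vector per time step.
# All dimensions and weights here are invented for demonstration.
rng = np.random.default_rng(0)

FEATURE_DIM = 16   # e.g., ASR text embedding + audio features (assumed)
HIDDEN_DIM = 32    # recurrent state size (assumed)

W_x = rng.normal(scale=0.1, size=(HIDDEN_DIM, FEATURE_DIM))  # input weights
W_h = rng.normal(scale=0.1, size=(HIDDEN_DIM, HIDDEN_DIM))   # recurrent weights
b = np.zeros(HIDDEN_DIM)

def rnn_step(h_prev, x_t):
    """One RNN update: new hidden state from previous state and input."""
    return np.tanh(W_x @ x_t + W_h @ h_prev + b)

# Run a short "conversation" of 5 feature vectors through the network;
# the hidden state accumulates context from the whole dialogue so far.
h = np.zeros(HIDDEN_DIM)
for _ in range(5):
    x = rng.normal(size=FEATURE_DIM)  # stand-in for one turn's features
    h = rnn_step(h, x)

print(h.shape)
```

The point of the recurrence is that the hidden state `h` carries the history of the conversation forward, which is what lets such a model condition each response on everything said so far.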

Google Duplex

The network uses the output of Google's automatic speech recognition (ASR) technology, as well as features from the audio, the history of the conversation, the parameters of the conversation, and more. The team trained the understanding model separately for each task, but leveraged a shared corpus across tasks. Finally, they used hyperparameter optimization from TFX to further improve the model.
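The article does not show TFX's tuning tooling, but the general idea of hyperparameter optimization can be illustrated with a toy random search. The search space and the scoring function below are invented purely for illustration:

```python
import random

# Toy illustration of hyperparameter search: sample random configurations
# and keep the one with the lowest validation loss. The search space and
# the loss function are made up for this example.
search_space = {
    "learning_rate": [1e-4, 3e-4, 1e-3, 3e-3],
    "hidden_units": [64, 128, 256],
}

def validation_loss(cfg):
    # Stand-in for actually training a model and measuring validation
    # loss; here it just prefers lr=1e-3 and 128 hidden units.
    return (abs(cfg["learning_rate"] - 1e-3) * 100
            + abs(cfg["hidden_units"] - 128) / 128)

random.seed(42)
best_cfg, best_loss = None, float("inf")
for _ in range(10):
    cfg = {k: random.choice(v) for k, v in search_space.items()}
    loss = validation_loss(cfg)
    if loss < best_loss:
        best_cfg, best_loss = cfg, loss

print(best_cfg, best_loss)
```

Real systems replace the stand-in loss with a full training-and-evaluation run per configuration, and often use smarter strategies than random sampling, but the loop structure is the same.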

It is difficult not to be astonished by the results of this demonstration.

The Google Duplex system is capable of carrying out sophisticated conversations and it completes the majority of its tasks fully autonomously, without human involvement. The system has a self-monitoring capability, which allows it to recognize the tasks it cannot complete autonomously (e.g., scheduling an unusually complex appointment). In these cases, it signals to a human operator, who can complete the task.
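Google has not described the self-monitoring mechanism, but a common pattern for this kind of fallback is to escalate to a human whenever the system's confidence in handling a turn drops below a threshold. The names and threshold below are assumptions, sketched for illustration only:

```python
# Hypothetical sketch of confidence-based escalation: if the system is
# not confident it can handle a conversational turn autonomously, it
# hands off to a human operator. All names and values are assumptions.
CONFIDENCE_THRESHOLD = 0.7

def handle_turn(utterance, confidence):
    """Decide who handles this turn based on the model's confidence."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"assistant handles: {utterance!r}"
    return f"escalated to human operator: {utterance!r}"

print(handle_turn("Book a table for two at 7pm", 0.95))
print(handle_turn("We only take group bookings on alternate holidays", 0.40))
```

The design choice here is that the automated system never guesses its way through a turn it cannot model well; uncertainty itself is the signal that routes the task to a person.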

Ray Kurzweil has evangelized and worked on this type of technology for years, so it would be interesting to know whether his role at Google brought him and his ideas into contact with the Duplex team.

With this new speaking ability, Google seems very close to passing the Turing test. Proposed by English computer scientist Alan Turing in 1950, the test is a way of evaluating a machine's ability to demonstrate intelligent behavior. To pass, a computer's natural-language responses would have to be indistinguishable from a human's. So far Duplex's domain is limited, but we are dealing with exponential technology.

By this summer, the team will start testing the Duplex technology within the Google Assistant, to help users make restaurant reservations, schedule hair salon appointments, and get holiday hours over the phone.

You will now also be able to choose among six new voices for the Google Assistant on your Android phone or Google Home, potentially the ones used in this demo. Google also said the Assistant can speak 30 languages and will be available in 80 countries by the end of the year, including seven new ones: Denmark, Korea, Mexico, the Netherlands, Norway, Spain and Sweden. Amazon's Echo is also available in more than 80 countries.
For those of us concerned about technological unemployment, the hairs on the back of our necks are probably standing straight up by now. Clearly, all jobs that involve answering a telephone and responding to questions now have a target on them. This includes help desks, customer service, and much more. How much longer will the Assistant be talking to a person, rather than to another chatbot? The sociological and psychological effects of this technology are also a huge unknown.

"We need to set the right expectations with everyone," Nick Fox, VP of product and design for Google Assistant and Search, told CNET. "These are implementation questions of the technology that I'd say, humbly, we don't quite know all the answers to yet, and need to be figured out as we see this in the real world."

SOURCE: Google AI Blog

By 33rd Square