
March 21, 2013

Digital Avatars Crossing The Uncanny Valley



NVIDIA Face Works

 
Computer Graphics
Two new projects point to a future where the uncanny valley, the rift in our perception where an artificial character stops looking realistic and starts looking creepy, is overcome. University of Cambridge researchers have developed Zoe, an artificial talking head, and NVIDIA has demonstrated an impressive new computer-graphics simulation of a face called Face Works.
Meet Zoe: a digital talking head which can express human emotions on demand with "unprecedented realism" and could herald a new era of human-computer interaction.

The system, called "Zoe", is the result of a collaboration between researchers at Toshiba's Cambridge Research Lab and the University of Cambridge's Department of Engineering.

Zoe, or her offspring, could be used as a visible version of Siri, as a personal assistant in smartphones, or to replace mobile-phone texting with "face messaging," in which friends receive a spoken, expressive message instead of text.

The lifelike face can display emotions such as happiness, anger, and fear, and changes its voice to suit any feeling the user wants it to simulate. Users can type in any message, specifying the required emotion, and the face recites the text. According to its designers, it is the most expressive controllable avatar ever created, replicating human emotions with unprecedented realism.

To recreate her face and voice, researchers recorded British actress Zoe Lister’s speech and facial expressions.

The framework behind “Zoe” could in the near future enable people to upload their own faces and voices to customize and personalize their own emotionally realistic, digital assistants. A user could, for example, text the message “I’m going to be late” and set her emotion to “frustrated.” A friend would then receive a “face message” that looked like the sender, repeating the message in a frustrated way.

Zoe Digital avatar

The team that created Zoe is currently looking for applications, and is also working with a school for autistic and deaf children, where the technology could help pupils to "read" emotions and lip-read.

Ultimately, the system could have multiple uses, including gaming, robotics, audio-visual books, online lectures, and other user interfaces.

“This technology could be the start of a whole new generation of interfaces which make interacting with a computer much more like talking to another human being,” Professor Roberto Cipolla, from the Department of Engineering, University of Cambridge, said.

The program used to run Zoe is just tens of megabytes in size, which means that it can be easily incorporated into even the smallest computer devices, including tablets and smartphones.

Uncanny Valley

It works by using a set of fundamental emotions. Zoe’s voice, for example, has six basic settings: Happy, Sad, Tender, Angry, Afraid and Neutral. The user can adjust these settings to different levels, as well as altering the pitch, speed and depth of the voice itself.

By combining these levels, it becomes possible to pre-set or create almost infinite emotional combinations. For instance, combining happiness with tenderness and slightly increasing the speed and depth of the voice makes it sound friendly and welcoming.
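As a rough illustration only, the kind of emotion mixing described above might be sketched like this in Python. The function and parameter names are invented for the example; this is not Zoe's actual interface, just a sketch of how six basic settings plus pitch, speed and depth could be combined into one profile:

```python
# Hypothetical sketch of combining emotion levels into a voice profile.
# The six setting names mirror those described in the article; the API
# itself is an assumption, not the real system.

EMOTIONS = ("happy", "sad", "tender", "angry", "afraid", "neutral")

def make_emotion_profile(levels, pitch=1.0, speed=1.0, depth=1.0):
    """Normalize a dict of emotion weights and attach voice parameters."""
    unknown = set(levels) - set(EMOTIONS)
    if unknown:
        raise ValueError(f"unknown emotions: {unknown}")
    total = sum(levels.values())
    if total:
        mix = {e: levels.get(e, 0.0) / total for e in EMOTIONS}
    else:
        # No weights given: fall back to a purely neutral voice.
        mix = {e: (1.0 if e == "neutral" else 0.0) for e in EMOTIONS}
    return {"mix": mix, "pitch": pitch, "speed": speed, "depth": depth}

# The article's "friendly and welcoming" example: happiness combined
# with tenderness, with slightly increased speed and depth.
friendly = make_emotion_profile({"happy": 0.7, "tender": 0.3},
                                speed=1.1, depth=1.1)
print(friendly)
```

Normalizing the weights keeps any combination of settings comparable, which is one plausible way an interface could expose "almost infinite emotional combinations" from only six base emotions.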

To make the system as realistic as possible, the research team recorded Lister speaking a dataset of thousands of sentences, which they used to train the speech model, and tracked her face with computer-vision software as she spoke. This data fed voice- and face-modelling algorithms that can recreate expressions, with matching speech, on a digital face directly from text alone.


 


Face Works

In related news, KurzweilAI.net also featured the annual GPU Technology Conference demonstration of NVIDIA's "Face Works," a technology made possible by the company's Titan graphics card, capable of 1TB/s of memory bandwidth.

In a new approach to rendering highly realistic facial (and vocal) expression, NVIDIA is able to take 32GB of facial data (bump maps, texture maps, lighting, expressions, etc.) and compress it down to 400MB.
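Taken at face value, those figures amount to roughly an 80-fold reduction. A quick sanity check, assuming decimal units since the keynote numbers are round:

```python
# Back-of-the-envelope check of the quoted compression ratio
# (decimal units assumed: 1 GB = 1000 MB).
raw_mb = 32 * 1000      # 32 GB of captured facial data, in MB
compressed_mb = 400     # compressed Face Works asset, in MB
ratio = raw_mb / compressed_mb
print(f"compression ratio: {ratio:.0f}x")
```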

In the second segment of the opening-day keynote at the conference, NVIDIA co-founder and CEO Jen-Hsun Huang showed a demo of Face Works, an amazingly realistic simulation of a face that must be seen to be believed.

Potential applications include animated video, avatar-based videoconferencing, and virtual actors in film.

Here’s the NVIDIA demo:




SOURCE: University of Cambridge, KurzweilAI.net

By 33rd Square



The Story of the Chessboard


The classic parable of how the inventor of the game of chess used his knowledge of exponential growth to trick an emperor is commonly used to explain the staggering and accelerating growth of technology. The 33rd square on the chessboard represents the first step into the second half of the chessboard, where exponential growth takes off.
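The parable's numbers are easy to verify: square n of the chessboard holds 2^(n-1) grains, so the 33rd square alone carries more than four billion, and the full board totals more than 18 quintillion:

```python
# The chessboard parable in numbers: doubling the grains on each
# successive square is exponential growth, 2**(n-1) on square n.
def grains_on_square(n):
    return 2 ** (n - 1)

def grains_up_to(n):
    # Sum of the geometric series 1 + 2 + 4 + ... + 2**(n-1).
    return 2 ** n - 1

print(grains_on_square(33))  # first square of the "second half"
print(grains_up_to(64))      # grains on the whole board
```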

33rd Square explores technological progress in AI, robotics, genomics, neuroscience, nanotechnology, art, design and the future as humanity encroaches on The Singularity.











Copyright 2012-2014 33rd Square