Embodiment and Artificial Intelligence

Monday, February 6, 2012



A big buzzword in artificial intelligence research these days is 'embodiment', the idea that intelligence requires a body or, in practical terms, a robot. Embodiment theory was brought into artificial intelligence most notably by Rodney Brooks in the 1980s. Brooks showed that robots could be more effective if they 'thought' (planned or processed) and perceived as little as possible: the robot's intelligence is geared towards handling only the minimal amount of information necessary to make its behavior appropriate and/or as desired by its creator.
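
Brooks's point lends itself to a small illustration. Below is a minimal sketch of a reactive, Brooks-style controller, assuming hypothetical read_proximity() and set_wheel_speeds() helpers standing in for real hardware; it is not code from any actual robot. Each layer maps current sensing directly to action with no world model or planning, and a higher-priority layer simply overrides a lower one.

```python
# Minimal sketch of a Brooks-style reactive controller.
# The sensor and actuator helpers below are hypothetical stand-ins.
import random

def read_proximity():
    """Stand-in for a real proximity sensor; returns a distance in metres."""
    return random.uniform(0.0, 2.0)

def set_wheel_speeds(left, right):
    """Stand-in for a real motor interface."""
    print(f"wheels: left={left:.2f} right={right:.2f}")

def avoid(distance):
    """Higher-priority layer: turn away when something is close."""
    if distance < 0.3:
        return (-0.5, 0.5)   # spin in place, away from the obstacle
    return None              # no opinion; defer to lower layers

def wander():
    """Lower-priority layer: drift forward with small random steering."""
    turn = random.uniform(-0.1, 0.1)
    return (0.5 + turn, 0.5 - turn)

def control_step():
    # No world model, no planning: sensing is mapped straight to action,
    # and the higher layer subsumes (overrides) the lower one.
    distance = read_proximity()
    command = avoid(distance) or wander()
    set_wheel_speeds(*command)

for _ in range(5):
    control_step()
```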

In the last few years, a growing body of researchers has begun to explore the possibility that this definition is too limited. Led by Rolf Pfeifer at the Artificial Intelligence Laboratory at the University of Zurich, Switzerland, these researchers say that the notion of intelligence makes no sense outside of the environment in which it operates. Pfeifer's lab created the ECCEROBOT to help test some of these notions.


For researchers like Pfeifer, the notion of embodiment must, of course, capture not only how the brain is embedded in a body but also how that body is embedded in the broader environment.

Today, Pfeifer and Matej Hoffmann, also at the University of Zurich, set out this thinking in a kind of manifesto for a new approach to AI. And their conclusion has far-reaching consequences. They say it's not just artificial intelligence that we need to redefine, but the nature of computing itself.


According to Pfeifer and Hoffmann,
...if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes.
The paper takes the form of a number of case studies examining the nature of embodiment in various physical systems. For example, Pfeifer and Hoffmann look at the distribution of light-sensing cells within fly eyes.

Biologists have known for 20 years that these are not distributed evenly in the eye but are more densely packed towards the front of the eye than to the sides. What's interesting is that this distribution compensates for the phenomenon of motion parallax.

When a fly is in constant forward motion, objects to the side move across its field of vision faster than those to the front.  "This implies that under the condition of straight flight, the same motion detection circuitry can be employed for motion detection for the entire eye," point out Pfeifer and Hoffmann.

That's a significant advantage for the fly. With any other distribution of light-sensitive cells, it would require much more complex motion-detecting circuitry.

Instead, the particular distribution of cells simplifies the problem. In a sense, the morphology of the eye itself performs a computation. A few years ago, a team of AI researchers built a robot called Eyebot that exploited exactly this effect.
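
To make the parallax argument concrete, here is a rough numerical sketch (not taken from the paper; the speed, distance, and receptor-spacing values are made-up assumptions). It compares the time an image point takes to cross two neighboring receptors under an even spacing with a spacing that widens toward the sides in proportion to the parallax.

```python
# Rough numerical sketch of the fly-eye argument (illustrative numbers only):
# during straight forward flight, the image of a point at viewing angle theta
# sweeps across the eye at angular speed  omega = v * sin(theta) / r.
import math

v = 1.0   # forward speed (m/s), assumed
r = 0.5   # distance to surrounding objects (m), assumed constant for simplicity

def angular_speed(theta):
    """Apparent angular speed of a point seen at angle theta from the heading."""
    return v * math.sin(theta) / r

def uniform_spacing(theta, base=0.02):
    """Even receptor spacing (radians) everywhere on the eye."""
    return base

def parallax_matched_spacing(theta, base=0.02):
    """Spacing proportional to sin(theta): dense at the front, coarse at the sides."""
    return base * math.sin(theta)

print(f"{'theta':>6} {'uniform dt (s)':>15} {'matched dt (s)':>15}")
for deg in (10, 30, 60, 90):
    theta = math.radians(deg)
    omega = angular_speed(theta)
    # Time for the image to travel between two neighboring receptors.
    dt_uniform = uniform_spacing(theta) / omega
    dt_matched = parallax_matched_spacing(theta) / omega
    print(f"{deg:>6} {dt_uniform:>15.3f} {dt_matched:>15.3f}")

# With the matched spacing the crossing time is the same at every angle, so a
# single fixed-delay motion-detection circuit can serve the whole eye.
```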

AI researcher Ben Goertzel has also begun to look more concretely at embodiment as key to bringing about Artificial General Intelligence (AGI). In an earlier paper, he took a more measured view:
Embodiment is important, it’s incredibly useful as a learning mechanism for minds – but we shouldn’t get carried away with it and assume that all non-embodied mechanisms for getting information into AI’s are Bad Things.  Rather, in a sufficiently flexible AGI framework, it’s possible to have embodiment and also utilize the approaches typically associated with anti-embodiment philosophies.  This may have the effect of making both the pro- and anti-embodiment schools of thought unhappy with one’s work.  However, it may also provide the maximum rate of progress toward actually creating AGI.

Technology Review