April 22, 2013
Researchers Connect Entropy To Intelligent Behaviour
This story was originally published by Inside Science News Service.
A radical concept could revise theories addressing cognitive behavior.
The second law of thermodynamics, which states that the entropy of an isolated system never decreases, dictates that a complex system evolves toward greater disorder in the way its internal components arrange themselves. In a new paper, two researchers explore a mathematical extension of this principle that focuses not on the arrangements the system can reach now, but on those it could reach in the future. They argue that simple mechanical systems postulated to follow this rule show features of "intelligence," hinting at a connection between this most human of attributes and fundamental physical laws.
Alexander Wissner-Gross, a physicist at Harvard University and the Massachusetts Institute of Technology, and Cameron Freer, a mathematician at the University of Hawaii at Manoa, developed an equation that they say describes many intelligent or cognitive behaviors, such as upright walking and tool use.
The researchers suggest that intelligent behavior stems from the impulse to seize control of future events in the environment. This is the exact opposite of the classic science-fiction scenario in which computers or robots become intelligent, then set their sights on taking over the world.
The findings describe a mathematical relationship that can "spontaneously induce remarkably sophisticated behaviors associated with the human 'cognitive niche,' including tool use and social cooperation, in simple physical systems," the researchers wrote in a paper published today in the journal Physical Review Letters.
"It's a provocative paper," said Simon DeDeo, a research fellow at the Santa Fe Institute, who studies biological and social systems. "It's not science as usual."
Wissner-Gross called the research "very ambitious" and cited developments in multiple fields as its major inspirations.
The mathematics behind the research comes from thermodynamics, the theory of how heat energy does work and diffuses over time. One of its core concepts is entropy, a measure of a system's disorder. The second law of thermodynamics states that in any isolated system, entropy tends to increase: a mirror can shatter into many pieces, but a collection of broken pieces will not reassemble into a mirror.
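The "disorder" the second law tracks can be made concrete by counting arrangements: Boltzmann's formula sets a system's entropy proportional to the logarithm of the number of microstates consistent with its macrostate. A toy count in the spirit of the mirror example (the piece counts and unit choice here are illustrative, not from the paper):

```python
import math

def arrangements(pieces):
    """Number of distinct orderings of `pieces` distinguishable fragments."""
    return math.factorial(pieces)

# An intact mirror has essentially one arrangement; ten shards have many.
W_intact, W_shattered = arrangements(1), arrangements(10)
print(W_intact, W_shattered)  # 1 vs 3628800

# Boltzmann entropy S = k_B * ln(W); take k_B = 1 for illustration.
S_intact = math.log(W_intact)       # 0.0
S_shattered = math.log(W_shattered)
print(S_shattered > S_intact)       # shattering raises entropy
```

The asymmetry in the mirror analogy falls out directly: there are vastly more "shattered" arrangements than "intact" ones, so random evolution overwhelmingly lands in the high-entropy state.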
"[The paper] is basically an attempt to describe intelligence as a fundamentally thermodynamic process," said Wissner-Gross.
The researchers developed a software engine, called Entropica, and gave it models of a number of situations in which it could demonstrate behaviors that greatly resemble intelligence. They patterned many of these exercises after classic animal intelligence tests.
In one test, the researchers presented Entropica with a situation where it could use one item as a tool to remove another item from a bin, and in another, it could move a cart to balance a rod standing straight up in the air. Governed by simple principles of thermodynamics, the software responded by displaying behavior similar to what people or animals might do, all without being given a specific goal for any scenario.
"It actually self-determines what its own objective is," said Wissner-Gross. "This [artificial intelligence] does not require the explicit specification of a goal, unlike essentially any other [artificial intelligence]."
Entropica's intelligent behavior emerges from the "physical process of trying to capture as many future histories as possible," said Wissner-Gross. Future histories represent the complete set of possible future outcomes available to a system at any given moment.
Wissner-Gross calls the concept at the center of the research "causal entropic forces." These forces motivate intelligent behavior by encouraging a system to preserve as many future histories as possible. In the cart-and-rod exercise, for example, Entropica controls the cart to keep the rod upright. Allowing the rod to fall would drastically reduce the number of remaining future histories, or, in other words, lower the entropy of the cart-and-rod system. Keeping the rod upright maximizes the entropy: it preserves every future history that can begin from that state, including those in which the rod eventually falls.
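The intuition behind a causal entropic force can be sketched numerically: from each candidate action, sample many random future trajectories and prefer the action that leaves the widest spread of reachable futures. The toy below is not the authors' Entropica software; it is a minimal, invented illustration using their simplest setting, a particle in a one-dimensional box, where entropy maximization pushes the particle away from the walls toward the center, where the most futures stay open:

```python
import math
import random
from collections import Counter

def future_entropy(pos, horizon, samples, lo, hi):
    """Estimate the diversity of reachable futures from `pos` by
    Monte Carlo sampling random walks of length `horizon`."""
    ends = Counter()
    for _ in range(samples):
        p = pos
        for _ in range(horizon):
            p = min(hi, max(lo, p + random.choice((-1, 1))))
        ends[p] += 1
    total = sum(ends.values())
    # Shannon entropy (in nats) of the sampled end-state distribution.
    return -sum((c / total) * math.log(c / total) for c in ends.values())

def causal_entropic_step(pos, lo=0, hi=20, horizon=15, samples=400):
    """Pick the move whose resulting position keeps the most futures open."""
    candidates = [min(hi, max(lo, pos + d)) for d in (-1, 0, 1)]
    return max(candidates, key=lambda p: future_entropy(p, horizon, samples, lo, hi))

random.seed(0)
pos = 1  # start next to a wall
for _ in range(40):
    pos = causal_entropic_step(pos)
print(pos)  # drifts toward the interior of the box, away from the walls
```

No goal is ever specified: the drift toward the center emerges purely from preferring states with more reachable futures, which is the qualitative behavior the paper reports for its particle-in-a-box example.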
Image credit: Alexander Wissner-Gross
The research may have applications beyond what is typically considered artificial intelligence, including language structure and social cooperation.
DeDeo said it would be interesting to use this new framework to examine Wikipedia, and research whether it, as a system, exhibited the same behaviors described in the paper.
"To me [the research] seems like a really authentic and honest attempt to wrestle with really big questions," said DeDeo.
One potential application of the research is in developing autonomous robots, which can react to changing environments and choose their own objectives.
"I would be very interested to learn more and better understand the mechanism by which they're achieving some impressive results, because it could potentially help our quest for artificial intelligence," said Jeff Clune, a computer scientist at the University of Wyoming.
Clune, who creates simulations of evolution and uses natural selection to evolve artificial intelligence and robots, expressed some reservations about the new research, which he suggested might stem from differences in the jargon used in different fields.
Wissner-Gross indicated that he expected to work closely with people in many fields in the future in order to help them understand how their fields informed the new research, and how the insights might be useful in those fields.
The new research was inspired by cutting-edge developments in many other disciplines. Some cosmologists have suggested that certain fundamental constants in nature have the values they do because otherwise humans would not be able to observe the universe. Advanced computer software can now compete with the best human players in chess and the strategy-based game called Go. The researchers even drew from what is known as the cognitive niche theory, which explains how intelligence can become an ecological niche and thereby influence natural selection.
The proposal requires that a system be able to process information and predict future histories very quickly in order to exhibit intelligent behavior. Wissner-Gross suggested that the new findings fit well within an argument linking the origin of intelligence to natural selection and Darwinian evolution -- that nothing besides the laws of nature is needed to explain intelligence.
Although Wissner-Gross suggested that he is confident in the results, he allowed that there is room for improvement, such as incorporating principles of quantum physics into the framework. Additionally, a company he founded is exploring commercial applications of the research in areas such as robotics, economics and defense.
"We basically view this as a grand unified theory of intelligence," said Wissner-Gross. "And I know that sounds perhaps impossibly ambitious, but it really does unify so many threads across a variety of fields, ranging from cosmology to computer science, animal behavior, and ties them all together in a beautiful thermodynamic picture."
Wissner-Gross also recently gave a keynote at the Foresight Technical Conference, "Bringing Computational Programmability to Nanostructured Surfaces," in which he argues that matter itself can catch up with software by becoming increasingly programmable.
SOURCE Inside Science
By Chris Gorski, Inside Science News Service.