August 26, 2013
Machines May Learn Like Us
Studies have found that neural network computer models, which are used in a growing number of applications, may learn to recognize patterns in data using the same algorithms as the human brain.
A growing number of experiments with neural networks are revealing that these models behave strikingly like actual brains when performing certain tasks. Researchers say the similarities suggest a basic correspondence between the underlying learning algorithms of brains and computers.
The algorithm used by a computer model called the Boltzmann machine, invented by Geoffrey Hinton and Terry Sejnowski in 1983, appears particularly promising as a simple theoretical explanation of a number of brain processes, including development, memory formation, object and sound recognition, and the sleep-wake cycle.
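The core of the Boltzmann machine algorithm can be sketched in a few lines. The toy below is a fully visible version written for illustration only: the function names, hyperparameters, and training loop are assumptions for the sketch, not Hinton and Sejnowski's original implementation. It captures the characteristic two-phase learning rule, in which connection strengths move toward the unit correlations seen in the data ("wake") and away from the correlations the model produces on its own ("sleep").

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gibbs_step(state, W, b, rng):
    """One sweep of Gibbs sampling: update each binary unit
    given the current states of all the others."""
    for i in range(len(state)):
        p = sigmoid(W[i] @ state + b[i])
        state[i] = 1.0 if rng.random() < p else 0.0
    return state

def train_boltzmann(data, n_steps=200, lr=0.1, seed=0):
    """Toy fully visible Boltzmann machine (illustrative only).
    Learning rule: dW ~ <s s^T>_data - <s s^T>_model,
    the "wake" and "sleep" phases of the algorithm."""
    rng = np.random.default_rng(seed)
    n = data.shape[1]
    W = np.zeros((n, n))
    b = np.zeros(n)
    for _ in range(n_steps):
        # Positive ("wake") phase: unit correlations clamped to the data.
        pos = data.T @ data / len(data)
        # Negative ("sleep") phase: correlations under the model's own samples.
        sample = rng.integers(0, 2, size=n).astype(float)
        for _ in range(20):
            sample = gibbs_step(sample, W, b, rng)
        neg = np.outer(sample, sample)
        W += lr * (pos - neg)
        np.fill_diagonal(W, 0.0)  # no self-connections
        b += lr * (data.mean(axis=0) - sample)
    return W, b
```

Because both phases contribute symmetric correlation matrices, the learned weight matrix stays symmetric, mirroring the bidirectional connections the model posits between units.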
Hinton — the great-great-grandson of the 19th-century logician George Boole, whose work is the foundation of modern computer science — has always wanted to understand the rules governing when the brain beefs a connection up and when it whittles one down — in short, the algorithm for how we learn. “It seemed to me if you want to understand something, you need to be able to build one,” he said. Following the reductionist approach of physics, his plan was to construct simple computer models of the brain that employed a variety of learning algorithms and “see which ones work,” said Hinton, who splits his time between the University of Toronto, where he is a professor of computer science, and Google.
For decades, many of Hinton’s computer models languished. But thanks to advances in computing power, scientists’ understanding of the brain and the algorithms themselves, neural networks are playing an increasingly important role in neuroscience.
Early on, Hinton’s attempts at replicating the brain were limited. Computers could run his learning algorithms on small neural networks, but scaling the models up quickly overwhelmed the available hardware. However, in 2005, Hinton discovered that if he sectioned his neural networks into layers and ran the algorithms on them one layer at a time, which approximates the brain’s structure and development, the process became more efficient.
Although Hinton published his discovery in two top journals, neural networks had fallen out of favor with researchers. But in the years since, the theoretical learning algorithms have been put to practical use in a surging number of applications, such as the Google Now personal assistant and the voice search feature on Microsoft Windows phones.
Neural networks have recently hit their stride thanks to Hinton's layer-by-layer training method, the use of high-speed computer chips called graphics processing units (GPUs), and an explosive rise in the number of images and recorded speech available for training. The networks can now correctly recognize about 88 percent of the words spoken in normal English-language conversations, compared with about 96 percent for an average human listener. They can identify cats and thousands of other objects in images with similar accuracy, and in the past three years they have come to dominate machine learning competitions.
Giulio Tononi, head of the Center for Sleep and Consciousness at the University of Wisconsin-Madison, has found that gene expression inside synapses changes in a way that supports the hypothesis that synapses grow during waking hours and are pruned during sleep: Genes involved in synaptic growth are more active during the day, and those involved in synaptic pruning are more active during sleep.
Bruno Olshausen, a computational neuroscientist at the University of California, Berkeley, and his research team recently used neural network models of higher layers of the visual cortex to show how brains are able to create stable perceptions of visual inputs in spite of image motion. In another recent study, they found that neuron firing activity throughout the visual cortex of cats watching a black-and-white movie was well described by a Boltzmann machine.
A potential application of that work is in the building of neural prostheses, such as an artificial retina. With an understanding of "the formatting of information in the brain, you would know how to stimulate the brain to make someone think they are seeing an image," Olshausen said.
Source: Quanta Magazine
By 33rd Square
Tags: AI, artificial intelligence, brain, brain initiative, Geoffrey Hinton, GPU, neural network, sparse coding, Terry Sejnowski
33rd Square explores technological progress in AI, robotics, genomics, neuroscience, nanotechnology, art, design and the future as humanity encroaches on The Singularity.