Stanford researchers, working with Google and NVIDIA, have created a new neural network system for machine learning that is more than six times the size of the network built last year that taught itself to recognize cats on the internet.
When Stanford researchers teamed up with Google X last year, they created the world's largest artificial neural network designed to simulate a human brain. That project famously produced a system that could recognize cats on the internet.
Now Andrew Ng, who directs Stanford's Artificial Intelligence Lab and was involved with Google's previous neural endeavor, has taken the project a step further. He and his team have created another neural network, more than six times the size of Google's record-setting achievement.
[Image source: NVIDIA]
The model Google developed in 2012 was made up of 1.7 billion parameters, the digital version of neural connections.
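To make "parameters" concrete, here is a minimal sketch of how they are counted in a single fully connected layer. The layer sizes below are illustrative and not taken from either network's actual architecture:

```python
# A "parameter" is one learned number in the network: for a fully
# connected layer, that is one weight per input-output pair plus one
# bias per output. (Hypothetical layer sizes, for illustration only.)
def dense_layer_params(n_inputs: int, n_outputs: int) -> int:
    return n_inputs * n_outputs + n_outputs

# Parameter counts grow multiplicatively with layer width, which is why
# even a single wide layer can hold hundreds of millions of parameters.
print(dense_layer_params(10_000, 10_000))  # 100010000
```

Stacking many such layers is how totals reach into the billions.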
The technology is being presented at the International Conference on Machine Learning in Atlanta this week. Details of the neural network are also available in a press release from NVIDIA.
This is all still very small compared to the human brain, which has on the order of 100 trillion synaptic connections, but if networks keep scaling at this pace, the gap in raw connection count could close surprisingly quickly.
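The back-of-envelope arithmetic behind that claim can be sketched as follows. The figures are rough assumptions taken from the article: a ~1.7 billion parameter 2012 network, a new system about six times larger, and ~100 trillion connections in the brain:

```python
import math

# Rough figures from the article (assumptions, not precise measurements):
google_2012 = 1.7e9          # parameters in Google's 2012 network
stanford_2013 = 6 * google_2012  # "more than six times the size"
human_brain = 100e12         # estimated synaptic connections

# If each generation were another six-fold jump, how many more jumps
# would it take to match the brain's raw connection count?
jumps = math.log(human_brain / stanford_2013) / math.log(6)
print(round(jumps, 1))  # ~5.1 more six-fold increases
```

Of course, raw connection count is only a crude proxy; matching the number says nothing about matching the function.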
Machine learning, a fast-growing branch of the artificial intelligence (AI) field, is the science of getting computers to act without being explicitly programmed. In the past decade, machine learning has given us self-driving cars, effective web search and a vastly improved understanding of the human genome. Many researchers believe that it is the best way to make progress towards human-level AI.
By 33rd Square