33rd Square explores technological progress in AI, robotics, genomics, futurism, space, neuroscience, nanotechnology, art, design, and singularity personalities like Ray Kurzweil as the emerging future approaches the point where the Singularity Is Near.
Thursday, February 23, 2012
Google’s Move From Keyword Searching to Knowledge Searching
Google's Knowledge Graph is set to make big changes to search and may be the first incarnation of AI. In a recent interview with Mashable, Google Fellow and SVP Amit Singhal said that Google doesn't really understand the questions you ask it. If you ask it for “the 10 deepest lakes in the U.S.,” it will give you a very good result based on the keywords in the phrase and on sites with significant authority for those words and even word groupings, but “We cross our fingers and hope someone on the web has written about these things or topics.”
The future of Google Search, though, could be a very different story. In an extensive conversation, Singhal, who has been in the search field for 20 years, outlined a developing vision for search that takes it beyond mere words and into the world of entities, attributes and the relationships between those entities. In other words, Google's future search engine will not only understand your lake question but also know that a lake is a body of water and tell you the depth, surface area, temperature and even salinity of each lake.
Search, Singhal explained in the interview, started as a content-based, keyword-index task that changed little in the latter half of the 20th century, until the arrival of the World Wide Web, that is. Suddenly search had a new friend: links. Google, Singhal said, was the first to use links as “recommendation surrogates.” In those early days, Google based its results on content links and the authority of those links. Over time, Google added a host of signals about content, keywords and you to build an even better query result.
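The link-as-recommendation idea can be illustrated with a toy sketch of link-based authority in the spirit of PageRank. Everything below (the function, the damping factor, the three-page link graph) is an illustrative assumption for readers, not Google's actual algorithm:

```python
def pagerank(links, damping=0.85, iters=50):
    """Toy power-iteration PageRank: a page's authority flows from
    the authority of the pages that link to it."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        # Every page keeps a small baseline share of rank...
        new = {p: (1 - damping) / len(pages) for p in pages}
        # ...and passes the rest along its outgoing links.
        for p, outs in links.items():
            for q in outs:
                new[q] += damping * rank[p] / len(outs)
        rank = new
    return rank

# Page "a" is linked to by both other pages, so it ends up most authoritative.
links = {"a": ["b"], "b": ["a"], "c": ["a"]}
ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # → a
```

The point of the recommendation-surrogate view is visible in the toy result: the page with the most (and best-ranked) incoming links wins, regardless of its own content.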
Eventually Google transitioned from examining keywords to examining meaning. “We realized that the words ‘New’ and ‘York’ appearing next to each other suddenly changed the meaning of both those words.” Google developed statistical heuristics that recognized that those two words appearing together form a new kind of word. However, Google did not yet really understand that New York is a city, with a population and a particular location.
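One classic statistical heuristic of the kind Singhal alludes to is collocation detection: score adjacent word pairs by how much more often they co-occur than chance would predict. The pointwise-mutual-information sketch below, including its thresholds and toy corpus, is an illustrative assumption, not Google's actual method:

```python
import math
from collections import Counter

def find_collocations(tokens, min_pmi=1.5, min_count=2):
    """Flag adjacent word pairs that co-occur far more often than their
    individual frequencies predict (pointwise mutual information)."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    total = len(tokens)
    collocations = {}
    for (w1, w2), n in bigrams.items():
        if n < min_count:
            continue
        # PMI = log2( P(w1,w2) / (P(w1) * P(w2)) )
        pmi = math.log2((n / total) /
                        ((unigrams[w1] / total) * (unigrams[w2] / total)))
        if pmi >= min_pmi:
            collocations[(w1, w2)] = round(pmi, 2)
    return collocations

corpus = ("new york is big . i love new york . "
          "a new day . york minster . new york hotels").split()
result = find_collocations(corpus)
print(result)  # "new york" is flagged; "new day" and "york minster" are not
```

In this toy corpus, “new” and “york” each appear in other contexts, but their pairing is so much more frequent than chance that the heuristic treats “new york” as a single unit, which is exactly the behavior Singhal describes.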
Still, word sequences and the meaning they carry are something, but not enough for Google or Singhal, who was recently elected to the National Academy of Engineering.
Google now wants to transform words that appear on a page into entities that mean something and have related attributes. It’s what the human brain does naturally, but for computers, it’s known as Artificial Intelligence.
The work towards this is already underway. Google is “building a huge, in-house understanding of what an entity is and a repository of what entities are in the world and what should you know about those entities,” said Singhal.
In 2010, Google purchased Freebase, a community-built knowledge base packed with some 12 million canonical entities. Twelve million is a good start, but Google has, according to Singhal, invested dramatically to “build a huge knowledge graph of interconnected entities and their attributes.”
The transition from a word-based index to this knowledge graph is a fundamental shift that will radically increase power and complexity. Singhal explained that the word index is essentially like the index you find at the back of a book: “A knowledge base is huge compared to the word index and far more refined or advanced.”
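To make the book-index analogy concrete: the classic "word index" behind keyword search is an inverted index, mapping each word to the documents that contain it, with multi-word queries answered by intersecting those lists. This minimal sketch (the documents and function names are invented for illustration) shows why such an index knows where words occur but nothing about what they mean:

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each word to the set of document IDs containing it --
    the 'back of the book' index Singhal compares search to."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

docs = {
    "a": "crater lake is the deepest lake in the us",
    "b": "lake tahoe is a large alpine lake",
    "c": "the grand canyon is a deep canyon",
}
index = build_inverted_index(docs)

# A keyword query is just a set intersection over posting lists.
print(index["deepest"] & index["lake"])  # → {'a'}
```

Nothing in this structure records that a lake is a body of water or that depth is one of its attributes; that is precisely the gap the knowledge graph of entities and attributes is meant to fill.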
Right now Google is, Singhal says, building the infrastructure for the more algorithmically complex search of tomorrow, and that task, of course, does include more computers. All those computers are helping the search giant build out the Knowledge Graph, which now has “north of 200 million entities.” What can you do with that kind of knowledge graph (or base)?
Initially, you just take baby steps. Although evidence of this AI-like intelligence is beginning to show up in Google Search results, most people probably haven’t even noticed it.
Type “Monet” into Google Search, for instance, and, along with the standard results, you’ll find a small area at the bottom: “Artwork Searches for Claude Monet.” In it are thumbnail results of the top five or six works by the master. Singhal says this is an indication that Google search is beginning to understand that Monet is a painter and that the most important thing about an artist is his greatest works.
When writer Lance Ulanoff noted that this does not seem wildly different from, or more exceptional than, the traditional results above it, Singhal cautioned him that judging the Knowledge Graph's power on this basis would be like judging an artist on work he did as a 12- or 24-month-old.
It’s also worth noting that millions of people now believe they already have AI search thanks to Apple’s iPhone 4S and Siri, the intelligent assistant (along with its True Knowledge competitor, Evi). Siri uses the information it can access on your phone and through the web to answer natural-language questions. Whatever Google’s Knowledge Graph can do, it clearly needs to go beyond Siri’s brand of AI.
Pinpointing exactly how far you can take the “search of the future,” however, is somewhat difficult for Singhal. “We’re building the ‘hadron collider.’ What particles will come out of it, I can’t predict right now,” he said.
On the other hand, Singhal does admit that it is his dream to build the Star Trek computer. Like Siri, you could ask this computer, which appeared on the 1960s sci-fi TV show, virtually any question and get an intelligent answer. “All aspects of computing or AI improve when you have such an infrastructure in-house,” said Singhal, referring to the massive knowledge graph Google is building. “You can process query or question much better, and you get a step closer to building the Star Trek computer,” he said.
Another frontier that will benefit from the power of Google’s Knowledge Graph is robotics. Singhal is, admittedly, no expert, but noted that robotics, which exists at the intersection of mechanical engineering and computing, struggles when it comes to language capabilities. “I believe we are laying the foundation for how robotics would incorporate language into the future of robot-human interaction,” he said. Linking the open-source Robot Operating System (ROS) to the Knowledge Graph would have enormous benefits for robotics development, and if you support embodiment as a path to strong AI, the impacts there are considerable.
Future robots with access to Google’s entity-based search engine via ROS might be able to understand that the “tiny baby” they’re caring for is small, fragile and always hungry. The robot might even know how to feed the baby because it would know the entity “always hungry” has to be cross-referenced with the fact that it’s a “baby,” which is also an entity in the knowledge graph, and includes attributes like “no solids.”
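The baby scenario can be sketched as a toy entity-attribute graph: entities carry named attributes, and a lookup walks "is_a" links so an entity inherits facts it doesn't state itself. The schema, entity names and attributes below are all hypothetical illustrations, not Google's actual representation:

```python
# Hypothetical miniature knowledge graph: entities as keys,
# attributes and relations as labeled values.
graph = {
    "baby":  {"is_a": "human", "fragile": True,
              "always_hungry": True, "diet": "no solids"},
    "human": {"is_a": "mammal"},
}

def lookup(entity, attribute):
    """Resolve an attribute, walking up 'is_a' links when the
    entity itself does not carry it."""
    while entity is not None:
        attrs = graph.get(entity, {})
        if attribute in attrs:
            return attrs[attribute]
        entity = attrs.get("is_a")  # fall back to the parent entity
    return None

# The robot cross-references what it knows about the entity "baby":
print(lookup("baby", "diet"))           # → no solids
print(lookup("baby", "always_hungry"))  # → True
```

Even this crude structure captures the cross-referencing the paragraph describes: "always hungry" and "no solids" live on the same node, so knowing the charge is a baby immediately constrains how it should be fed.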
How many entities would it take for Google’s Knowledge Graph to know the answer to everything? Singhal laughed and suggested,
“The beauty of the human mind is that it can build things and decide things in ways we didn’t think were possible, and I think the best answer I can give right now is that the human mind would keep creating knowledge and I see what we’re building in our knowledge graph as a tool to aid the creation of more knowledge. It’s an endless quantitative cycle of creativity.”
Freebase had 12 million entities when it was acquired by Google, and Singhal says that the Knowledge Graph has now exceeded 200 million. What may really push the system beyond today's recognized brand of search into the realm of intelligence is when all of the celebrity news, biographies, recipes, product reviews, Facebook entries, research papers and other data on the web is accompanied by more purely instructional materials. Singhal suggests in the interview with Ulanoff that the system now is like a 12- to 24-month-old child. Perhaps if that child were learning 'baby talk' and absorbing the environment around it the way a human child does (along with reading encyclopedic entries and the detritus of the world-wide web), the spark of long-promised human-level artificial intelligence might arise. Chris Horton at B2B Community has an interesting take, based on the Singhal interview, on the possibility of the Knowledge Graph yielding intelligence as well:
With deeper knowledge comes a greater capacity for judgment, both good and bad. How will Google legislate how an infinitely more complex intellectual “organism” forms judgments? If Google’s search engine was an isolated robot, the answer to this question might be more intriguing than concerning. But as a businessperson who is increasingly dependent upon Google’s favorable judgment for my livelihood, the prospect of an omniscient search engine leaves me a bit unnerved.