Google has a confession to make: It does not understand you. If you ask it “the 10 deepest lakes in the U.S.,” it will give you a very good result based on the keywords in the phrase and on sites with significant authority for those words and even word groupings, but Google Fellow and SVP Amit Singhal says Google doesn’t understand the question. “We cross our fingers and hope someone on the web has written about these things or topics.”

The future of Google Search, though, could be a very different story. In an extensive conversation, Singhal, who has worked in search for 20 years, outlined a developing vision that takes search beyond mere words and into the world of entities, their attributes and the relationships between those entities. In other words, Google’s future search engine will not only understand your lake question but will know that a lake is a body of water and be able to tell you the depth, surface area, temperature and even salinity of each lake.
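To make the distinction concrete, here is a minimal sketch of the difference between matching keywords and querying typed entities with attributes. It is purely illustrative: the page snippets, entity records and depth figures are placeholders, not anything from Google's systems.

```python
# Toy contrast between keyword matching and an entity/attribute lookup.
# All records and numbers are illustrative placeholders.

# Keyword view: pages are bags of words; "deepest" is only a string to match.
pages = {
    "page1": "list of the deepest lakes in the US ...",
    "page2": "crater lake is a deep lake in oregon ...",
}

def keyword_search(query):
    terms = set(query.lower().split())
    # Rank pages by how many query terms they contain.
    return sorted(pages, key=lambda p: -len(terms & set(pages[p].split())))

# Entity view: a lake is a typed entity with attributes we can reason over.
entities = [
    {"name": "Crater Lake", "type": "lake", "country": "US", "depth_m": 594},
    {"name": "Lake Tahoe",  "type": "lake", "country": "US", "depth_m": 501},
    {"name": "Lake Chelan", "type": "lake", "country": "US", "depth_m": 453},
]

def deepest_lakes(country, n=10):
    lakes = [e for e in entities if e["type"] == "lake" and e["country"] == country]
    return sorted(lakes, key=lambda e: e["depth_m"], reverse=True)[:n]

print(keyword_search("10 deepest lakes in the US"))  # pages that mention the words
print(deepest_lakes("US"))                           # a ranked answer to the question
```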
To understand where Google is going, however, you need to know where it’s been.
Search, Singhal explained, started as a content-based, keyword-index task that changed little in the latter half of the 20th century until the arrival of the World Wide Web. Suddenly search had a new friend: links. Google, Singhal said, was the first to use links as “recommendation surrogates.” In those early days, Google based its results on page content, links and the authority of those links. Over time, Google added a host of signals about content, keywords and you to build an even better query result.
Eventually Google transitioned from examining keywords to meaning. “We realized that the words ‘New’ and ‘York’ appearing next to each other suddenly changed the meaning of both those words.” Google developed statistical heuristics that recognized that those two words appearing together effectively form a new word. However, Google still did not understand that New York is a city, with a population and a particular location.
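One common statistical heuristic for this kind of recognition is to score word pairs by pointwise mutual information, flagging pairs like “New York” that co-occur far more often than chance as a single unit. Google has not said exactly which heuristic it used, so treat the following as a generic sketch rather than Google's method:

```python
import math
from collections import Counter

# Toy corpus; in practice the counts would come from a web-scale crawl.
corpus = ("new york is a big city . boston is a city . the lake is deep . "
          "new york is in the united states . a lake is a body of water .").split()

unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = len(corpus)

def pmi(w1, w2):
    """Pointwise mutual information: how much more often the pair occurs
    together than independent word frequencies would predict."""
    p_pair = bigrams[(w1, w2)] / (total - 1)
    if p_pair == 0:
        return float("-inf")
    p1, p2 = unigrams[w1] / total, unigrams[w2] / total
    return math.log2(p_pair / (p1 * p2))

print(pmi("new", "york"))  # high score: treat "new york" as one unit
print(pmi("is", "a"))      # lower score: an ordinary word sequence
```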
Still, word sequences and the meaning they carry are something, but not enough for Google or for Singhal, who was recently elected to the National Academy of Engineering.
Big Changes Coming
Google now wants to transform words that appear on a page into entities that mean something and have related attributes. It’s what the human brain does naturally, but for computers, it’s known as Artificial Intelligence.
It’s a challenging task, but the work has already begun. Google is “building a huge, in-house understanding of what an entity is and a repository of what entities are in the world and what should you know about those entities,” said Singhal.
In 2010, Google purchased Metaweb, the company behind Freebase, a community-built knowledge base packed with some 12 million canonical entities. Twelve million is a good start, but Google has, according to Singhal, invested dramatically to “build a huge knowledge graph of interconnected entities and their attributes.”
The transition from a word-based index to this knowledge graph is a fundamental shift that will radically increase power and complexity. Singhal explained that the word index is essentially like the index you find at the back of a book: “A knowledge base is huge compared to the word index and far more refined or advanced.”
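To picture the shift Singhal is describing, here is a rough side-by-side sketch of the two structures: an inverted word index that only maps words to the documents containing them, and an entity graph that maps a typed entity to its attributes and its links to other entities. The field names and values are my illustration, not Google's internal schema.

```python
# 1) Inverted word index: like a back-of-the-book index, it only knows
#    which documents mention which words.
inverted_index = {
    "monet": ["doc12", "doc87"],
    "lake":  ["doc34", "doc87", "doc90"],
}

# 2) Knowledge graph: entities with types, attributes and edges to other entities.
knowledge_graph = {
    "Claude Monet": {
        "type": "painter",
        "born": 1840,
        "notable_works": ["Water Lilies", "Impression, Sunrise"],
        "movement": "Impressionism",  # edge to another entity below
    },
    "Impressionism": {
        "type": "art movement",
        "notable_artists": ["Claude Monet", "Edgar Degas"],
    },
}

# The word index can only hand back documents to go read...
print(inverted_index["monet"])
# ...while the graph can answer a question about the entity directly.
print(knowledge_graph["Claude Monet"]["notable_works"])
```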
Right now Google is, Singhal told me, building the infrastructure for the more algorithmically complex search of tomorrow, and that task, of course, does include more computers. All those computers are helping the search giant build out the knowledge graph, which now has “north of 200 million entities.” What can you do with that kind of knowledge graph (or base)?
Initially, you just take baby steps. Although evidence of this AI-like intelligence is beginning to show up in Google Search results, most people probably haven’t even noticed it.
Knowledge Graph at Work Today
Type “Monet” into Google Search, for instance, and, along with the standard results, you’ll find a small area at the bottom: “Artwork Searches for Claude Monet.” In it are thumbnail results of the top five or six works by the master. Singhal says this is an indication that Google search is beginning to understand that Monet is a painter and that the most important thing about an artist is his greatest works.
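One way to read that: once an entity carries a type, the type itself can decide which attribute is worth surfacing. The sketch below is entirely hypothetical, a guess at the shape of such a rule rather than Google's actual selection logic.

```python
# Hypothetical sketch: choose what to highlight for an entity based on its type.
highlight_by_type = {
    "painter": "notable_works",  # for an artist, surface the greatest works
    "lake":    "depth_m",        # for a lake, surface the depth
    "city":    "population",
}

def highlight(entity):
    attribute = highlight_by_type.get(entity["type"])
    return entity.get(attribute) if attribute else None

monet = {
    "name": "Claude Monet",
    "type": "painter",
    "notable_works": ["Water Lilies", "Impression, Sunrise", "Haystacks"],
}

print(highlight(monet))  # the works a Monet query might surface as thumbnails
```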
When I noted that the Monet result does not seem wildly different from, or more impressive than, the traditional results above it, Singhal cautioned me that judging the knowledge graph’s power on this would be like judging an artist on work he did as a 12- or 24-month-old.
It could be seen as somewhat ironic that Google is addressing what has been a key criticism leveled at it by its chief search competitor, Microsoft Bing. The software giant ran a series of scathing commercials which, while never mentioning Google by name, depicted the search results most people get as comically lacking in context. Most people understood that the criticism and the joke were aimed at Google, and now Google is doing something about the quality of its results.
When I asked Singhal whether he had thought about Bing’s criticism, noting that Bing has long advertised that its results focus on useful answers rather than just links, he deflected, saying he couldn’t comment on what Bing may or may not be doing.
It’s also worth noting that millions of people now believe they already have AI search, thanks to Apple’s iPhone 4S and Siri, the intelligent assistant. Siri uses the information it can access on your phone and across the web to answer natural-language questions. Whatever Google’s Knowledge Graph can do, it clearly needs to go beyond Siri’s brand of AI.
Pinpointing exactly how far you can take the “search of the future,” however, is somewhat difficult for Singhal. “We’re building the ‘hadron collider.’ What particles will come out of it, I can’t predict right now,” he said.
On the other hand, Singhal does admit that it is his dream to build the Star Trek computer. As with Siri, you could ask this computer, which appeared on the 1960s sci-fi TV show, virtually any question and get an intelligent answer. “All aspects of computing or AI improve when you have such an infrastructure in-house,” said Singhal, referring to the massive knowledge graph Google is building. “You can process query or question much better, and you get a step closer to building the Star Trek computer,” he said.
Beyond Search
Speaking of Star Trek, there is another frontier that might benefit from the power of Google’s Knowledge Graph: robotics. Singhal is, admittedly, no expert, but he noted that robotics, which sits at the intersection of mechanical engineering and computing, struggles when it comes to language capabilities. “I believe we are laying the foundation for how robotics would incorporate language into the future of robot-human interaction,” he said.
It’s an exciting thought. Being a robot geek, I proceeded to paint a picture of the future that Singhal did not disagree with: Future robots with access to Google’s entity-based search engine might be able to understand that the “tiny baby” they’re caring for (What? You wouldn’t leave your baby with a robot?) is small, fragile and always hungry. The robot might even know how to feed the baby, because it would cross-reference the attribute “always hungry” with the fact that its charge is a “baby,” itself an entity in the knowledge graph with attributes like “no solids.”
As we talked, it occurred to me that while 200 million entities is a lot, the world of knowledge is vast. How many entities would it take for Google’s Knowledge Graph to know the answer to everything? Singhal laughed and, instead of pinpointing a number, spun the question around:
“The beauty of the human mind is that it can build things and decide things in ways we didn’t think were possible, and I think the best answer I can give right now is that the human mind would keep creating knowledge and I see what we’re building in our knowledge graph as a tool to aid the creation of more knowledge. It’s an endless quantitative cycle of creativity.”