The main difference between modern search engines and the search of previous decades is that there is no temporal delay, says Expect Labs CEO and founder Tim Tuttle. You no longer have to hit a button to get your results. Google is anticipating your query as you type it and constantly tweaking your results as your intent becomes clearer.
“Everything needs to be continuous and everything needs to be real time,” Tuttle said, speaking on a panel at Gigaom’s Structure Data conference Thursday. “What that means is there is no batch processing of data. You should expect your users will be sending you a continuous stream of context signals. … That should propagate through your content analysis and your rankings and your machine learning algorithms in real time so you can always deliver incrementally better recommendations and incrementally better results.”
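The idea Tuttle describes — per-event updates instead of batch jobs — can be illustrated with a toy ranker that refines its scores with every incoming signal. This is only a minimal sketch, assuming a simple exponential-decay scoring scheme; the class and method names are hypothetical and are not Expect Labs’ actual system.

```python
from collections import defaultdict

class StreamingRanker:
    """Toy illustration of continuous ranking: every context signal
    immediately adjusts the scores, with no batch recomputation."""

    def __init__(self, decay=0.9):
        self.decay = decay              # how quickly old signals fade
        self.scores = defaultdict(float)

    def observe(self, item, weight=1.0):
        # Decay all existing scores so stale signals lose influence,
        # then boost the item the new signal refers to.
        for key in self.scores:
            self.scores[key] *= self.decay
        self.scores[item] += weight

    def top(self, n=3):
        # Current best guesses, recomputable after every single event.
        return sorted(self.scores, key=self.scores.get, reverse=True)[:n]

ranker = StreamingRanker()
for signal in ["paris", "paris", "hotels", "paris", "flights"]:
    ranker.observe(signal)   # rankings update incrementally per signal
print(ranker.top(2))         # → ['paris', 'flights']
```

Because each `observe` call both decays old scores and adds the new signal, the ranking at any instant reflects the most recent context — the “incrementally better results” Tuttle describes.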
Both Tuttle and his panel mate, SwiftKey co-founder and CTO Ben Medlock, are at the forefront of building this kind of persistent search and analysis architecture in the field of natural language processing. SwiftKey not only scrapes every phrase uttered on the public internet to build its language models, Medlock said, but it also analyzes the phrasing and language usage of the individual users of its predictive keyboard. In a constant exchange between those two data models, SwiftKey is able not only to complete the word you started to type, but also to predict the next word in your sentence.
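The two-model exchange Medlock describes can be sketched with a pair of bigram models — one built from a broad corpus, one from the individual user — whose predictions are blended. This is a simplified illustration under assumed names and weights, not SwiftKey’s actual algorithm, which is far more sophisticated.

```python
from collections import defaultdict

def bigram_counts(corpus):
    """Count how often each word follows another in a list of sentences."""
    counts = defaultdict(lambda: defaultdict(int))
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(word, global_counts, user_counts, user_weight=0.7):
    """Blend the global language model with the user's own habits;
    user_weight controls how strongly personal usage dominates."""
    candidates = set(global_counts[word]) | set(user_counts[word])

    def prob(counts, candidate):
        total = sum(counts[word].values())
        return counts[word][candidate] / total if total else 0.0

    def score(candidate):
        return (user_weight * prob(user_counts, candidate)
                + (1 - user_weight) * prob(global_counts, candidate))

    return max(candidates, key=score) if candidates else None

# Hypothetical corpora: a general model and one user's typing history.
global_model = bigram_counts(["see you soon", "see you later",
                              "thank you very much"])
user_model = bigram_counts(["see you tomorrow", "see you tomorrow"])
print(predict_next("you", global_model, user_model))  # → tomorrow
```

Here the global model alone would suggest “soon,” “later,” or “very,” but the user’s personal history tips the blended score toward “tomorrow” — the kind of personalization the constant exchange between the two models makes possible.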
Expect Labs has built an app called MindMeld that essentially listens to your conversations, offering up relevant data on your tablet’s screen based on its analysis of your discussion. MindMeld started out tapping a single information source, Wikipedia, but it is expanding into other sources and will ultimately allow its anticipatory search technology to access any site or database, Tuttle said.
The biggest obstacle holding back this concept of persistent search is latency, Tuttle and Medlock explained. While the processing of data may be near instantaneous, that data’s round trip between your device and the cloud is not. SwiftKey, for instance, overcomes that issue today by doing much of its predictive analysis on the phone itself, Medlock said. But ultimately this kind of analysis requires reaching back to the cloud, where the vast reams of data, and the algorithms that form the linkages between them, reside.
That will change, though, Tuttle predicts. By the end of the decade, we could be packing around phones capable of storing a terabyte of data — enough capacity, he said, to hold Google’s entire knowledge graph, giving our phones the ability to search a large fraction of human knowledge in a millisecond.
Photo by Jakub Mosur