Last week, when Google released a concept video on how augmented reality glasses might work, it caused reactions ranging from skepticism to premature predictions of market mayhem. And parodies; lots of parodies. Should the industry take “Project Glass” seriously? Does Google have a truly disruptive user interface technology in its labs?
Certainly Google must overcome major technology and design barriers before bringing even a prototype to market. If any products show up this year – and Google’s not even hinting at how they might roll out – they’ll probably resemble location-based overlays of info snippets and alerts related to phone-based messaging. Think heads-up display of caller ID, text messages and, possibly, targeted local offers, rather than the voice-activated virtual agent of the vision video.
But implementation aside, does the video show that Google is thinking about a UI that could truly drive innovation, or is it just impractical science fiction? I’d say it is the former, based on key factors present in the groundbreaking UIs of MacOS, iOS, Nintendo consoles and Microsoft Kinect:
- Input and output. Siri isn’t just speech recognition. Beyond the semantic analysis it does behind the scenes to figure out which sources to search and which apps to launch, Siri offers audible answers and follow-up requests for further detail. It’s that two-way give-and-take that makes it a potential game-changer versus the voice-to-text input mechanisms on other phones.
- Contextual optimization. Early desktop GUIs were well suited to their device (keyboard, big screen, mouse) and their function (general-purpose application and file management, personal productivity). Someday, on-screen TV navigation via remote will be as well optimized for genre and visual browsing, based on personal preferences, as iOS is for tablet browsing and light communications.
- Easy-to-learn. Innovative UIs can gain fast adoption via the use of metaphor the way desktop GUIs mimicked documents, files and folders. Or they can teach users how to use new techniques the way videogames present training missions or simple tasks to gain familiarity.
- Practical-to-use. There can be a big difference between easy-to-learn and easy-to-use, or rather effective-to-use. The UIs that gain the most widespread adoption can gracefully move from one to the other. Keyboard shortcuts and macros may be powerful, but they’re too hard for the masses to learn.
How does Project Glass stack up?
Go back and re-watch the video. Google shows glasses that blend a heads-up display of contextually relevant information and application options with voice-command input. The applications it features are optimized for on-the-go activities like mapping and communications rather than, for instance, gaming or Google Docs. The augmented reality approach, in which camera, image mapping and GPS presumably combine to identify relevant apps and information, does all the work for the user, minimizing the need for training or, for that matter, proactive input. Google’s ideas seem aligned with all the necessary factors for innovative UIs.
But what of Google’s track record in user interface design? Android, Chrome and Gmail are competent implementations of principles invented elsewhere. Google’s UI leadership comes from search. There’s no question that Google taught the world how to navigate the web through hyperlinks returned from typing in one or two words. Google has de-emphasized approaches such as Q&A (Ask, Quora), faceted results from multiple filters (Best Buy), and visual cueing (Search-cube, Grokker) in favor of “guessing right” in the fastest manner possible. That also explains Google’s ham-handed attempt to integrate its user and social media data to personalize search results.
So Project Glass aligns with the critical UI factors, and it plays to Google’s strengths in user interface as well as its data, mapping and communications expertise. Apple’s own concept video for the “Knowledge Navigator” debuted in 1987, but it was set in September 2011. The company shipped the iPhone 4S, with deep Siri integration in iOS 5, in October 2011. I don’t think it will take Google 24 years to show results from Project Glass.