What would the perfect news application designed for Google Glass look like?

To say there’s a lot of debate about the “wearable technology” known as Google Glass would be an understatement. Some enthusiasts see it as the future of mobile man-machine interfaces, while others say it is more likely to be the new Apple Newton — in other words, a widely hyped product that will ultimately fail. But let’s assume some form of head-mounted display becomes commonplace: how will it change the way we consume content, and how will news outlets of all kinds have to change the way they think about what they do?

Google showed off some prototype apps it came up with for its virtual display at the South by Southwest Interactive festival, including interfaces for photo-sharing and other services that used either voice commands or touch menus relying on the device’s touch panel (which sits on the side of the headset). One of the apps it demonstrated was a New York Times app — designed by a developer at the newspaper — that mostly just pulled up headlines, but also allowed the user to ask for a story to be read aloud.

Voice interface, real-time, location aware

The voice interface for Glass is one of the obvious differences between it and other devices, although both the iPhone and Android phones support similar features for specific tasks via services like Siri and voice search. The need for audio input with Glass is driven in part by the size of the display, which is probably one of the most significant limiting factors when it comes to content: since it projects only a small virtual screen, there isn’t a lot of real estate for images or large chunks of text.

So what would the perfect news app designed for Glass look like? What follows are a few ideas I came up with — feel free to add your own in the comments:

  • Short excerpts: If you have limited real estate, then you need to be concise, so a headline and a short snippet of text would be ideal — at least as a starting point. In addition to Google News, there are already a number of services that are focusing on this approach for mobile devices, including Circa and Summly (Circa will be part of our startup showcase at paidContent Live on April 17). Theoretically at least, news-wire services would be best equipped for this kind of content.
  • Real-time updates: In addition to concise summaries of news stories, Circa also offers another interesting feature that would be very useful for a device like Glass, which is the ability to “follow” a story and get real-time updates as they arrive. In a sense, this would be like a news-specific version of Twitter — very short, real-time and likely curated or filtered by an editor, whether a human being or an algorithm or both.
  • Designed for voice and touch: As the Google prototype shows, voice is going to be an obvious interface for Glass, and using the touch panel will also be an important way of interacting with the content. That means a news app that can be navigated via spoken keywords (next, more, etc.) as well as one that is segmented in some way so that chunks can be chosen quickly and easily with a tap. This would require news outlets to do a fair amount of work with metadata and tagging of their content.
  • Location aware: To me at least, one of the most interesting aspects of a mobile device like Glass is that it knows where you are, and thanks to Google’s image-recognition technology, in many cases it even knows what you are looking at. The potential for adding useful information is huge, and Google has provided a glimpse of what that might be like with its Field Trip app, which adds “augmented reality”-style data. News updates and archives could be a significant source of useful information about specific locations, events and objects.
  • Prescriptive data: In addition to Glass, one of Google’s more interesting pieces of technology is Google Now, the dashboard it provides on some Android platforms (and may be bringing to iOS) that pulls together information from a variety of sources — calendar, email, photos, traffic — to tell a user what they need to know. Robin Sloan and Matt Thompson envisioned this kind of content in 2011 as part of a future in which heads-up displays appear on objects like mirrors, photo frames and eyeglasses.
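To make the "designed for voice and touch" idea concrete, here is a minimal sketch of what navigating a pre-segmented story with spoken keywords might look like. This is purely illustrative: the `StoryReader` class, the chunk structure and the command names ("next", "back") are assumptions, not part of any real Glass API.

```python
# Hypothetical sketch: stepping through a news story that has been
# pre-segmented into short chunks, driven by recognized voice keywords.
# Class name, chunk format and commands are invented for illustration.

class StoryReader:
    def __init__(self, chunks):
        self.chunks = chunks   # story pre-segmented into display-sized chunks
        self.position = 0

    def current(self):
        """Return the chunk currently shown on the display."""
        return self.chunks[self.position]

    def handle(self, command):
        """Dispatch a recognized voice keyword to a navigation action."""
        if command == "next" and self.position < len(self.chunks) - 1:
            self.position += 1
        elif command == "back" and self.position > 0:
            self.position -= 1
        return self.current()

reader = StoryReader([
    "Headline: Glass prototype apps shown at SXSW.",
    "Google demonstrated photo-sharing and news interfaces.",
    "A New York Times app read headlines aloud on request.",
])
print(reader.handle("next"))  # advances from the headline to the second chunk
```

The point of the sketch is the editorial work it implies, not the code itself: a speech recognizer can only map "next" to an action if the newsroom has already broken the story into addressable chunks.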

Not just news, but useful information

The migration of content to mobile platforms like Glass — which in many ways is just part of the ongoing evolution begun by mobile phones and tablets — poses a number of challenges for traditional and even new-media outlets. The technological know-how to take advantage of Google’s APIs, and to structure and tag content with metadata that will make it useful, is one challenge.
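As a rough illustration of what "structure and tag content with metadata" could mean for a location-aware device, here is a small sketch that filters archive items tagged with coordinates down to those near the user. The field names ("headline", "lat", "lon") and the radius are assumptions for illustration; the distance calculation is the standard haversine formula.

```python
import math

# Hypothetical sketch: surfacing archive stories tagged with coordinates
# when the user is physically nearby. Field names are invented.

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371.0  # mean Earth radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearby_stories(stories, lat, lon, radius_km=1.0):
    """Return stories whose location tag is within radius_km of the user."""
    return [s for s in stories
            if distance_km(lat, lon, s["lat"], s["lon"]) <= radius_km]

archive = [
    {"headline": "Bridge repairs begin", "lat": 40.7061, "lon": -73.9969},
    {"headline": "Museum exhibit opens", "lat": 40.7794, "lon": -73.9632},
]
# A user standing near the first location sees only the first story.
print(nearby_stories(archive, 40.7057, -73.9964))
```

None of this is hard engineering; the challenge the paragraph above describes is upstream of it — attaching reliable location metadata to stories in the first place.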

Another challenge is the ability to think of information in different ways: not necessarily just as “news” but specific kinds and formats of news, or even more broadly as simply “useful information” for someone wearing a mobile device. This isn’t something that most traditional media outlets are used to thinking of as important, but they are going to have to start doing so.

That’s not to say every news organization has to suddenly divert resources to the creation of content for Google Glass or other heads-up displays — but it does mean they need to start thinking about what it would involve now, and transforming some of the ways they produce content to take advantage of it. Not only will those skills be useful for all kinds of mobile devices, but if they don’t start the evolution soon, Google will fill the data gap itself and they will be left on the outside looking in.

Images courtesy of Flickr users Thomas Hawk and Arvind Grover