Through the Google looking glass

There have been two major transitions in personal computing since the introduction of the PC. The first was the transition from the terminal/command line interface to windowed user interfaces. The second was the transition to gesture-based mobile devices. Both transitions reordered the industry, destroyed some companies, and led to the creation of new giants. The transition to active eyewear is no less significant. Once the user interface is perfected, it will allow people to — for all practical purposes — communicate telepathically.

It’s too soon to say how people will choose to use this technology or which applications will ultimately prove popular. Right now, we’re at the equivalent of a year before the introduction of windowed interfaces. For example, augmented reality experts were quick to point out that Google Glass will be a poor augmented reality experience. That may be true, but it’s likely users will discover the glasses are good for other applications, such as controlling music, navigation or ambient notification. At this stage, Google should focus on perfecting the platform and be prepared to go through several versions before worrying about which applications will be the winners. Leave that for entrepreneurs to sort out.

Active eyewear is a platform play, not a product. Eyewear, as with clothing, is a fashion accessory where individual tastes drive purchasing decisions. Fashion is, to put it politely, not a strong point for any technology company besides Apple. This is an opportunity for eyewear designers and manufacturers, such as Oakley, to bring glasses back into style and to offer consumers a diverse range of products to choose from. The winning strategy for Google will be to focus on enabling technologies such as micro projectors and short-range wireless, to make these components as small and easy to integrate into designs as possible, while leveraging Android as a development framework.

There is also an important demographic trend that nobody has mentioned. The kids who grew up with the Web in the 1990s are now turning into middle-aged adults, most of whom will soon need reading glasses. The population everywhere is aging, and while glasses may not currently be in style with the generation who grew up with contacts and LASIK, fashion trends change, and can change quickly. In the near future, eyeglasses will no longer be a sign of disability, but rather of augmented capability: in short, a status symbol as well as a useful tool.

Brian McConnell is an inventor, author and technology entrepreneur based in San Francisco. He is the founder of Worldwide Lexicon, an open source collaborative translation platform.

Image courtesy of Flickr user i eated a cookie.

9 Responses to “Through the Google looking glass”

  1. Mike Adams

    If you combined these glasses with Leap’s Kinect-type controls, you could do real-world gaming. The glasses would make other players look like their characters from World of Warcraft, for instance. You could battle them in the real world, but you would be “seeing” their avatar in the real world. It would move when they moved. You could “see” the weapon they wield and use the real-world terrain as your battlefield. Right?

  2. ishekhar

    Agree with the article.
    The world doesn’t need to wait another 15-20 years to jump from one interface to another,
    and it’s about time the mode of interaction changes from fingers to speech, something that is more natural to humans.

    • >>and its about time the mode of interaction changes from fingers to speech; something which is more natural to humans.<<

      Is that why it takes us decades to learn to speak in a manner that is precise, compact, and unambiguous? Or why we find it easier to draw (and read) maps rather than just speaking (and listening to) verbal directions? Why a picture can often be worth a thousand words (and assembly instructions are drawn rather than typed)? Why it's more useful to work with the text of a speech rather than an audio recording of it?

      Given human physiology, speech strikes a convenient balance between transmitting and receiving information without the need to resort to additional tools. It's both less natural and less powerful than touching and pointing.

  3. Android is not very well suited for that kind of environment, nor is iOS. They are built on reactive UI assumptions (notice all the confirmation dialogs?). Why would a proactive system wait until you look down the Underground stairs before realizing, oops, that doesn’t work?

    But proactive systems have to be able to handle simple questions like “Why?”. Try making that user-friendly with a system built on black-box logic; the system doesn’t know anything about the reasoning the programmer used.

    IMHO, in-your-eyes interfaces will be very different from both point-and-click and touch-based gestures, and neither iOS (Siri) nor Android has done anything particularly inventive to get there. Can one test proactive interfaces on Android? Sure, but it’s a pain in the back; on iOS, everybody at Apple would most likely end up with a sudden cardiac arrest.

  4. “The transition to active eyewear is no less significant than the two major transitions in personal computing.”

    And how does the author know this?

  5. A new technology has to make it easier to do something. Not in idea, but in practice. I can see how a thought-controlled device plugged into my optic nerve could make it easier to do things. I am not sure I can see how AR glasses, whose form factor, given present technology, is inconvenient for input, processing, and output, can do the same.

    This is at least 10-15 years out. Like IBM’s Simon to Apple’s iPhone.

  6. Cookie eater

    Just browsing my tech news feed and I see a photo of my cat, Panda Bear. Made my day. I love the internet. Also, deodorant with a rubber band around it? Why would I do that? Anyway, I wish I had something relevant to say. I do think that Google glasses will improve slowly, regardless of how ____ the first implementation is.

  7. Martin Edic

    There are a couple of glaring problems with this theory. Google has no success stories with hardware at all (they did not design the Nexus phone), and miniaturization is not a simple challenge. Where, for example, is the battery for those glasses? Have they invented some super-secret micro-battery technology? I don’t think so.

    • My assumption is that Android smartphones will serve as a base station for the glasses, so the long-range communication, etc., will be done there. Google showed with Android that they could compete in mobile, so while they don’t design hardware, they are good at providing enabling technology. I expect the eyewear design will be done by companies like Oakley, with active components provided by companies like HTC. The point is that it’s coming, and it’s too soon to predict which applications will be winners, or which companies will be winners (though Google is doing the right thing by experimenting, they may be shown up by someone we haven’t heard from yet).