
Summary:

Imagine going from having only one or two senses to experience the world to five or fifty. That’s what is happening in human-to-computer communications as we develop new user interfaces.

Photo by Signe Brewster.

We have gone from primarily using a keyboard and mouse to interface with our computers to adding voice, touch and a bit of gesture recognition to the mix. And while there will always be a special place in my heart for punch cards, switches and other more primitive interactions, we’re about to blow the founding four interfaces up in a crazy way (GigaOM Pro subscription req’d) thanks to a variety of research that basically wants to open up the communication pipeline between people and computers.

From the Leap Motion to the Myo armband that measures muscle movements, we’re getting access to a varied world of new ways to interact with computers. Perhaps you’ve seen the Skinput project that turns the human arm into a keyboard? Or played with a projected interface that lets you put a touchpad on any surface? What about the Reemo I saw demoed at Solid last week, which turns an arm motion into a means of controlling your VCR?

Here in my hometown of Austin, Plum Lighting is working on a light switch that lets you create gesture-based inputs for any lighting setting you want. Want a section of Hue lights to turn pink? Trace the letter P on the pad and it happens. The whole field of UI is exploding, which is why this profile in IEEE Spectrum of some work on the WorldKit effort at Carnegie Mellon caught my eye, especially this section on how we’re going to be surrounded by interfaces wherever we are.

In fact, future consumers may choose from many different kinds of interfaces, mixing and matching to satisfy their style and fancy. Other researchers are now investigating smart glasses that track eye movements, wristbands that capture hand gestures, tongue implants that recognize whispers, even neck tattoos that pick up subvocalizations—the subtle vibrations your throat muscles make when you imagine how a word sounds.

As a consumer I’m still wrapping my brain around the best options for different parts of my smart home, but if I were a developer I’d be both excited and overwhelmed by the opportunities these UIs present. Libraries that make it easy to translate a single computer-recognized meaning across multiple UIs might become very popular in the future. Or maybe as users we just program our personal faves for each task. What do you guys think?
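To make the library idea concrete, here’s a minimal sketch of what such a cross-UI translation layer might look like: several input modalities (a gesture, a voice command, a touch control) all resolve to the same computer-recognized intent, which is bound once to a handler. Everything here is hypothetical — the `IntentRouter` class, the Plum-style `trace_P` gesture name, and the Hue example are illustrations, not any real product’s API.

```python
# Hypothetical sketch: one intent, reachable from many UIs.

class IntentRouter:
    def __init__(self):
        self._bindings = {}   # (modality, raw_input) -> intent name
        self._handlers = {}   # intent name -> callable

    def bind(self, modality, raw_input, intent):
        """Register that this raw input, on this modality, means this intent."""
        self._bindings[(modality, raw_input)] = intent

    def on(self, intent, handler):
        """Attach the single handler that implements the intent."""
        self._handlers[intent] = handler

    def dispatch(self, modality, raw_input):
        """Translate a raw input into its intent and run the handler."""
        intent = self._bindings.get((modality, raw_input))
        if intent is None:
            return None          # unrecognized input on this modality
        return self._handlers[intent](intent)


router = IntentRouter()
router.on("lights_pink", lambda intent: "Hue lights set to pink")

# The same meaning, registered across three different UIs:
router.bind("gesture", "trace_P", "lights_pink")
router.bind("voice", "make the lights pink", "lights_pink")
router.bind("touch", "pink_preset_button", "lights_pink")

print(router.dispatch("gesture", "trace_P"))  # same result from any modality
print(router.dispatch("voice", "make the lights pink"))
```

The point of the design is that app developers write the handler once; each new UI (armband, smart glasses, projected touchpad) only has to add bindings, which is roughly the “translate a single meaning across multiple UIs” idea above.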

  1. More and more I realize that we should probably just watch Star Trek and see what they did. e.g. http://www.globalnerdy.com/2014/01/06/mobile-technology-as-predicted-by-star-trek/

    Reply Share
  2. Aside from wondering who uses a VCR (assume you mean DVR, but even then…) I wonder why no mention of mobiles. They use touch, kinesthetic gesture, gaze-detection, location, ambient light sensing, etc., etc., etc. TODAY.

    Users already mix and match by picking the device (and method of interaction) they feel best meets their needs at the moment.

    Reply Share
  3. John Kestner Thursday, May 29, 2014

    Overwhelming is right. Futuristic UI sells, but when multiplied across products trying to integrate into our environment, it becomes unusable. Even “natural interfaces” are not actually natural, but mnemonics at best. The canonical invisible computing example is eyeglasses, transparently augmenting our abilities (without getting us kicked out of bars). We’ve got a long way to go before the tech is truly forget-about-it.

    Reply Share
    1. Nicholas Paredes Thursday, May 29, 2014

      I don’t think it’s about futuristic interfaces at all. Most of our present interfaces are not very useful in the scheme of things. Imagine interfaces that grow with you. If I only use basic Office functionality, why do I have the very real burden of an excessive interface to get in the way of work?

      Interfaces will evolve in the future. I gave a talk on mobile design a month ago, and said that in the future the devices will build the apps on the fly. Perhaps there will be guidance for most of the big companies. But, if I just want to check my balance on a checking account, I should be able to generate an app to do that.

      The future is smarter and simpler than you believe.

      Reply Share