8 Comments

Summary:

As revolutionary as the mobile ecosystem is, it’s the interactions of more-intelligent connected devices with people outside the context of phones or computers that will drive more innovation, Mark Rolston, the chief creative officer at Frog Design, said at an event on Monday.


As revolutionary as the mobile ecosystem is, it’s the interactions of more-intelligent connected devices with people outside the context of phones or computers that will drive more innovation, says Mark Rolston, the chief creative officer at Frog Design. Rolston, speaking at the Mobile Future Forward conference on Monday in Seattle, described a future where devices become more contextually aware, thanks to embedded and connected sensors.

Instead of thinking about the buttons on a phone or a laptop, manufacturers and designers need to think about what will happen when computers are embedded in everything and connected all the time. Instead of computing being confined to a box on a desk or in the hand, computers will be everywhere, pulling data from a variety of places. Understanding how those computers will pull information about their environment, relay that data to users and then interpret what users want them to do creates a web of interaction that will require new ways of thinking and design.

In fact, user interaction might be a very minimal part of the overall design. For example, Rolston described a wearable glucose monitor that has elements embedded in the body, a monitor interpreting the data from the user’s bloodstream and a wearable screen for the patient to interact with. Of those three elements, the patient input screen is likely gathering the least important information and must convey complicated information simply.
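To make that division of labor concrete, here is a minimal Python sketch of the same three-part split, with the embedded sensor producing raw readings, the monitor interpreting them, and the patient screen showing only a short, simple message. The names, thresholds, and messages are assumptions for illustration, not details of the device Rolston described.

```python
# Purely illustrative sketch of the three-part design described above: an
# embedded sensor, a monitor that interprets the data, and a simple patient
# screen. All names, thresholds, and messages are hypothetical.
from dataclasses import dataclass


@dataclass
class SensorReading:
    glucose_mg_dl: float  # raw value from the embedded sensor
    timestamp: float


def interpret(reading: SensorReading) -> str:
    """The monitor does the heavy lifting: turning raw data into a status."""
    if reading.glucose_mg_dl < 70:
        return "LOW"
    if reading.glucose_mg_dl > 180:
        return "HIGH"
    return "IN RANGE"


def render_for_patient(status: str) -> str:
    """The wearable screen conveys complicated information simply."""
    messages = {
        "LOW": "Glucose low: consider a snack",
        "HIGH": "Glucose high: check insulin",
        "IN RANGE": "All good",
    }
    return messages[status]


# Most of the system's work happens before the patient ever sees the screen.
print(render_for_patient(interpret(SensorReading(glucose_mg_dl=65.0, timestamp=0.0))))
```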

In a conversation after his panel, Rolston explained that the challenges inherent in designing interfaces in such a world will come from devices trying to understand a user’s intent, as we build out new ways to interact with them, such as motion. How will a machine know when someone waving their hands while they talk to a friend becomes someone trying to tell a computer to do something? Of course, when a device can watch us and interpret our movements and commands effectively, it essentially gives computers the illusion of humanity. That’s the illusion Rolston apparently is trying to create.
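One rough way to picture that intent problem in code: the sketch below acts on a gesture only when the user is clearly addressing the device and the recognizer is confident, so an ordinary wave during conversation is ignored. The gating approach, names, and thresholds are assumptions, not anything Rolston or the conference specified.

```python
# A minimal sketch of the intent problem: only treat a gesture as a command
# when the user is clearly addressing the device and the recognizer is
# confident. The pattern, names, and numbers are assumptions for illustration.
from dataclasses import dataclass


@dataclass
class Gesture:
    kind: str          # e.g. "wave", "swipe"
    confidence: float  # recognizer confidence, 0.0 to 1.0


def should_act(gesture: Gesture, user_is_addressing_device: bool,
               threshold: float = 0.8) -> bool:
    """Return True only when the gesture should be treated as a command."""
    return user_is_addressing_device and gesture.confidence >= threshold


# A hand wave during conversation with a friend: ignored.
print(should_act(Gesture("wave", 0.9), user_is_addressing_device=False))  # False
# The same wave once the user has explicitly addressed the device: acted on.
print(should_act(Gesture("wave", 0.9), user_is_addressing_device=True))   # True
```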

  1. Yay! You’re so forward-thinking! Too much, maybe?

    1. No way! It is happening already. I’ve learned to embrace it :)

      http://www.whoisdanfonseca.com

  2. Angelina Christopher Tuesday, September 13, 2011

    Nice post, interesting to study… Thanks, Stacey Higginbotham. Also, I have just found this, check it out: http://www.saqibimran.com/

  3. Seems GigaOm is already bored of the mobile and cloud stuff, so what’s next? (Give our brains that much-needed daily dopamine hit.)

  4. Fascinating question, and it touches on another (which interests me more): how do we design this Internet of Things to augment human intelligence rather than replacing it? Any dystopian can read about the architecture of such a system and paint a very scary picture — and it isn’t hard to imagine a world of hyperintelligent devices creating a human world that looks a bit like “Idiocracy…”

    A more optimistic (and probably active and engaged) type could project forward and imagine this information-driven world as a kind of enlightenment, in which people understand more, know more, and can make (their own) better decisions thanks to their ability to process more data.

    Design has meaning in that context, doesn’t it? You can have a touchscreen with two big dumbed down icons telling you what to do, or you can have some kind of interface that speaks to the intelligence of the user — maybe even, for that purpose, getting to know the person. Did Rolston (or anyone else at the conference) speak to that?

    1. I would not worry too much about replacing human thinking; augmenting is much easier.

      All problem-solving animals have some form of visual self-awareness (magpies, dolphins, elephants, apes…). Elephants also seem to possess cognitive self-awareness.

      The point is that machines will get some form of self to solve problems, but it will be different from ours, or from animals' for that matter.

      The meaning of "mean" (the mean in math, being mean, X means Y…) "arrives" out of context. Context is organized data. Humans become testably visually self-aware at around 18 months, which should give you an idea of how "easy" it is and what is based on it. In other words, it's pretty much a point/data organization problem.

  5. Yeah, maybe thinking too much into it; I don’t think the design prospects are changing that dramatically.

  6. maturinuk reblogged this from Broken_Heart Blog and commented:

    With the developers and innovators struggling to understand social media, no wonder doctors …

  7. Interesting story; the way the internet and smart devices have become so vital in people’s lives, it has just become a necessity, like air…

  8. How will we design products for the Internet of Things? http://t.co/kKk5fPrY

  9. How will we design products for the Internet of Things? http://t.co/eBcX2UTw

  10. How will we design products for the Internet of Things? — Tech News and Analysis http://t.co/nHF7cnnC (via Instapaper)


Comments have been disabled for this post