With wearable tech like Google Glass, human behavior is now a design problem


For all the attention we’ve given in the past year to wearable devices like Google Glass, Nike FuelBand and Jawbone UP, the focus on hardware and form factor misses the far more thrilling – and perhaps frightening – topic of how wearable devices are going to change who we are as people. Wearables promise to let technology impact us on a more personal level, and as our gadgets become more intimate it’s inevitable that their influence will deepen.

Psychology researchers have been looking into human behavior reinforcement, and the conclusions they’ve reached are startling. The subconscious mechanisms by which a human brain forms habits are no longer a complete mystery, and that understanding has let us start devising tools for altering them. As a result, we’re now at the edge of an era in which human behavior has become a design problem.

Changing the unchangeable

In a 2011 article on feedback loops, Wired editor Thomas Goetz describes how a single “Your Speed” box on the side of a busy road does a better job of slowing down drivers than the most relentless speed trap, then illustrates the effect with a number of other examples. These examples point to a kind of revolution in persuasion tactics: we can encourage or discourage behaviors once thought unchangeable simply by offering immediate, actionable feedback. Well-designed feedback changes behavior.

Imagine what’s possible when we apply that kind of feedback loop to a broader range of habits. Health-related behaviors, for example. Or even buying behaviors.

Always aware, always on

What truly sets wearables apart from earlier platforms is their sensitivity and their constancy. A computer in your pocket can keep track of where you’re going, but it will drain your battery in the process. A computer on your wrist, by contrast, can track your steps, your heart rate and even your mood, and it can do so all day long. That lets it spot patterns and tailor its feedback accordingly. And because a wrist-mounted device occupies a constantly visible place on your body, it can put information in front of you instantly, at almost any time: if your wristwatch suddenly buzzed and displayed a message, you’d check it right away, something that can’t always be said of a smartphone.

It’s clear that wearables and feedback loops were meant for each other. Properly designed, a wrist- or head-mounted device could provide the kind of behavior-changing feedback that the “Your Speed” box does, saving lives in the process. Improperly designed, a similar device could reinforce actions that make us antisocial, paranoid or deadly, as the recent proliferation of texting-related accidents has made clear. Designers and manufacturers have the power to make either scenario come true.

“Cool or Creepy” is just the beginning

Recent conversations about ethics and wearable technology have mostly boiled down to Google Glass and its implications for privacy. It’s a valid concern, made even more apparent by the recent revelation of the NSA’s access to our phone records. After all, we’re talking about a near future in which a widely available device lets anyone record anything with minimal effort and little indication, and that violates a basic social contract in many societies.

But as Google itself has pointed out, there are plenty of other devices better suited to clandestine surveillance. And many technology users – younger consumers especially – have made it clear that they’re fine with sacrificing some privacy in exchange for a more personalized tech experience. The privacy debate needs to happen, but it’s ultimately a bogeyman that will fade once wearables go mainstream.

The ability to influence behavior, though, is a much bigger issue. So far, concerns about Google Glass can be summed up by the question “cool or creepy?” The question, asked in dozens of articles, implies that acceptance of wearables depends on your perception of new technology in general, and that, just as with cell phones and automobiles, we’ll all get over it eventually.

The ethics of influence

But the ability to alter habits and behaviors in the long-term is more than cool: it’s revolutionary. And using that ability to encourage damaging behaviors isn’t just creepy – it’s sinister. A recent parody video imagining a Google Glass experience that’s been “enhanced” by AdSense ads gives us a taste of what’s possible, but the reality will probably be much more subtle. And potentially more worrisome.

How about a device that gives you gentle cues to spend money irresponsibly? Or one that creates a cycle of digital dependence that makes today’s concerns over texting-addiction look quaint? The real ethical debate is in deciding how we’re going to use wearables to influence behaviors, once we master designing the feedback they offer.

At the same time, any tool that lets us consciously modify our unconscious behaviors could save or improve millions of lives. Rates of diabetes and heart disease, for example, could be dramatically reduced using technology that teaches us to make healthier decisions about diet and exercise. Wearables could even be used to support habits that reduce accidents at the workplace, or promote the kind of social engagement that reduces crime and improves quality of life.

This is social engineering in its most literal sense, made possible by technology, with all of the promise and paranoia that phrase implies.  

For more thoughts on wearable tech, here’s a video our company put together using material from a recent talk:
[vimeo 67684462 w=500 h=281]

Ziba Panel Series – Wearables from Ziba Design on Vimeo.

Sean Madden is executive managing director of Portland-based design firm Ziba Design.


13 Responses to “With wearable tech like Google Glass, human behavior is now a design problem”

  1. Great point: as wearables spread through the market, they raise broader questions such as privacy and behavioral influence.
    But I believe (and so does the OMsignal team) that wearables will have greater applications than what we’re seeing now – a bit like the tip of the iceberg. The medical field will benefit greatly, especially for prevention or treatment (I wouldn’t say “cure” either).
    In this case, design is key. Patients are more likely to adopt apparel that works for them, as opposed to something that reminds them of a condition.

    • Sean Madden

      Thank you for your interest, Charlie! Unfortunately, we had terrible AV issues and were unable to capture the audio for the panel. We believe we have resolved this and hope to share more long-format videos in the future.

  3. Interesting article – some further thinking, beyond the ability to socially engineer decision making, is ethics and morals themselves. Google and Apple have already made certain moral/ethical choices with respect to their app stores – and what isn’t in them. Google took this further with its recent decision to shut down a Glass app that it deemed inappropriate (adult content). What is interesting about all of this is the imposition of certain ethical/moral choices on others – which challenges very basic freedoms. At what point do these choices stop? Who decides what is ‘best’ for the use of technology? And what if those decision makers decide that some of what we now consider core freedoms are not in our best interest, because they differ from the choices of the designer/company/etc. (freedom of religion, association, etc.)?

    • Sean Madden

      Really great points, Paul. This reminds me of the battle we still have in some communities (in America, especially) around censorship in libraries and other public institutions. Of course, in other global communities this is an even harsher reality. Google, Apple, and their competitors do apply a certain morality through their filtering of who has the privilege of participating in their app stores. The interesting part to think about is the reason behind this morality. I would imagine that they are motivated more by protecting their brand than by any moral or ethical framework they wish to impose upon the populace. What will happen when they shift from passively imposing their sense of morality onto their customers to a much more active method via behavior-changing devices is anyone’s guess.

    • Sean Madden

      I’d never thought of that use case, but it could be remarkably effective and relatively simple to implement. Imagine how much compliance would increase if there were a marker on the physician, visible to her peers, indicating the cleanliness of her hands. A public feedback system could be very effective here.

  4. Matt Miesnieks

    Great insights & I completely agree (in fact, everyone @dekko does). What happens when the real world is a 1:1-scale 3D model, and our wearable sensor feedback makes us characters in a real-world SimCity? If we thought the ability of analytics to drive social game behaviors was powerful, wait until they drive our behavior in real-world economies. We’re working on the 3D mapping & tracking technologies that are going to enable this, and it’s far, far closer than people realize. If we think smartphones are a big deal (and they are – there’s never been anything like them in history), then wearables, and the software and data platforms behind the apps and services they power, are going to make the giants of today look tiny in terms of their influence over our lives.

    • Sean Madden

      Interesting perspective, Matt. When I think about this future and about influence, I always think of the early advertisers and their understanding of human psychology. It took heavy regulation to remove the worst of it from their practice (and that’s only in the US). If we reach the world you describe, it will be fascinating to see whether the behavior of influential companies is similarly regulated, should blatantly bad corporate behavior emerge.