Over the next decade or two, everything that can have connected digital technology injected into it, will. Today’s smart watches and smart shirts, such as adidas’ miCoach Elite, will become ubiquitous, as will adaptive technology such as connected cutlery like Lift Labs’ spoon for measuring and correcting Parkinson’s tremors. The trajectory is clear: each person will have hundreds of connected devices in their life.
That’s the good news. But we can also see enough of this future to know that managing a world of many devices, software ecosystems, and behaviors will require rethinking how we design, perhaps from scratch.
Multiple device system design
Our ability to manage multiple devices, each with its own software ecosystem, interface, and quirks, is reaching its limit. There’s only so much attention we can pay to our devices, and as consumers we’ve started to leave a wake of unused digital things that are a little awkward to use, recharge, wear, or carry around.
Moreover, many new devices involve user experiences that move across devices: a sensor synchronizes to a server that an app contacts to visualize analytics. There’s real value in creating new device categories, just as Nike did with the Nike Plus, but instead of making consumers learn new device ecosystems, we need to think about offering services. To do this, designers have to learn how to create experiences that smoothly span multiple devices.
Synchronized cloud-based services like Netflix, Dropbox, and Angry Birds provide a glimpse of how device-spanning design could feel. I can grab a file on whatever device is synched to my cloud service. I can pause a movie on one connected video display, and unpause it on another. I can continue the game I started on my phone on my TV. In all these situations I don’t really care about the specific device because it only serves as a frame for the thing I really care about, the service.
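The principle behind that pause-here, resume-there experience is that the service, not the device, owns the state. A minimal sketch in Python, with entirely hypothetical names (no real Netflix or Dropbox API is implied):

```python
# Device-agnostic playback continuity: state lives with the service,
# and any device is just a frame onto it. All names are hypothetical.

class PlaybackService:
    """Server-side store of per-user playback state."""

    def __init__(self):
        self._state = {}  # user_id -> (title, position_seconds)

    def pause(self, user_id, title, position):
        # Whatever device the user paused on records the position centrally.
        self._state[user_id] = (title, position)

    def resume(self, user_id):
        # Any other device can pick up exactly where playback stopped.
        return self._state.get(user_id)


svc = PlaybackService()
svc.pause("alice", "Some Movie", 1312)  # paused on the living-room TV
print(svc.resume("alice"))              # resumed on the phone
```

Because devices only read and write service-side state, adding a thirteenth screen requires no new pairing or configuration on the other twelve.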
Designing the multiple device system experience
How do you design a single service that can appear as an app, as a data visualization, as a specialized device, or as one of a dozen different hardware platforms we haven’t thought of yet? Current models typically fall into two broad categories: standards and vendor lock-in. Complex “do everything” standards make implementation difficult, require heavy configuration (I suggest looking at media sharing, which is way more complex than it should be, despite dozens of standards), and lead to a frustratingly inconsistent user experience that doesn’t scale well.
With vendor lock-in, everything works as long as buyers stay within a single company’s proprietary system. It scales only when the vendor scales it, which may be desirable from the company’s perspective but is unrealistic for consumers.
Speaking as both a consumer and a designer, I’ve seen neither approach enable sophisticated multi-device experiences. Companies still chase two-screen experiences… but what about the twelve-screen experience? The “twelve-screen, forty embedded sensors, ambient display, nearby camera drone and car” experience? The traditional models—let’s call them “linear”—may have been appropriate when we had a couple of multipurpose devices, but they feel too limited in a world of many connected devices.
Designing large-scale, multi-device service interactions is more like planning and running a farm than setting up and operating an assembly line. Let’s call this non-linear approach “emergent.” Essentially, we need to let go of the desire to tightly control functionality at the micro level, ditch the tools that stem from those assumptions (our popular programming languages, interaction design methods and software development environments), and focus on creating tools that induce large-scale behaviors at the macro level.
Emergent behavior in multi-device systems is not a new idea. Cellular automata and intelligent agents have been tried as inspirations in the space of multi-device experience design, and multi-device experience research goes back twenty years or more. General Magic’s visionary Telescript language tackled related questions in the early 90s. However, most of these projects focused on low-level functionality, which is a distraction.
What we’re missing is a high-level approach for creating emergent multi-device user experience. How can we easily tell an ensemble of (perhaps arbitrary) devices that we’re interested in achieving a certain result, and have them trend toward a positive outcome close to what we’re hoping to achieve? That’s exactly what we do when we ask a friend to make dinner, or we plant a garden, or we start a project where we don’t know all the steps. That is, in fact, largely how we manage our everyday lives. We expect that the world will be imprecise, but won’t fail catastrophically and will allow us to engage in a dialogue that iteratively guides the result in roughly the right direction.
What does such a system look like? There’s an inspirational class of software that demonstrates this kind of interaction well: “god games” like SimCity. In these games the complexity of the simulated environment is so high that it’s impractical, or impossible, to control all of the components. Instead, players cultivate an emergent system using a limited set of tools and hope it moves roughly in the desired direction. Of course, much of the entertainment in these games comes from the fact that the emergent system does not behave as desired and requires ongoing management.
The outcomes might not always be entirely automated. For example, a cleaning service can ask “Do you mean here?” when it’s not sure which of several objects you’re referring to, and you can answer by pointing and saying “no, on the bookcase.” (That would also be a more graceful way for devices to negotiate; Sims just burn down the town.) The role of user experience designers in this situation may be to use these same emergent behavior tools to create systems that are highly failure-resistant, so most outcomes will be positive most of the time. Of course some systems, say an IV drip controller, will need to be precisely predictable, but perhaps a hospital bed management system can be built as an emergent system that responds flexibly and occasionally requires a nurse to say “not right now, come back later” to a sheet changer that showed up at the wrong time.
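The “ask instead of fail” pattern above can be sketched in a few lines: when a request is ambiguous, the device opens a dialogue rather than raising an error. This is a toy illustration with hypothetical names, not any real device API:

```python
# Graceful clarification instead of catastrophic failure: an ambiguous
# request becomes a question back to the user. All names are hypothetical.

def clean(target, known_surfaces, ask):
    """Clean the surface matching `target`; ask the user if ambiguous.

    `ask` is a callback that poses a question and returns the user's answer
    (in a real system, via speech, pointing, or a screen prompt).
    """
    matches = [s for s in known_surfaces if target in s]
    if len(matches) == 1:
        return f"cleaning {matches[0]}"
    # Zero or several matches: start a conversation, not an error message.
    answer = ask(f"Do you mean one of {matches or known_surfaces}?")
    return f"cleaning {answer}"


surfaces = ["kitchen shelf", "bookcase shelf", "desk"]
print(clean("desk", surfaces, ask=lambda q: "desk"))            # unambiguous
print(clean("shelf", surfaces, ask=lambda q: "bookcase shelf"))  # asks first
```

The design choice is that ambiguity is an expected state with a conversational recovery path, which is what makes the overall system failure-resistant rather than brittle.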
Giving up control
Sure, the emergent model means that sometimes things won’t work out. But it’s not like the linear model doesn’t produce substandard results too. The assumption that we need to precisely identify and control all of our devices at all times leads, counter-intuitively, to less control over our daily lives. That’s the predictable result when designers assume users have the inclination or time to do all the work of creating and managing complex device ensembles.
Some experts may enjoy that kind of micromanaging attention, but we shouldn’t assume that everyone does. Instead of starting from the worm’s-eye view of a single device that needs constant help to find and work with other devices, we can start from the god-game view of a field of devices that needs to be cultivated. If we do that, we can start to see connected information-processing devices not as computers that need to communicate, but as capabilities to be used as needed. And we can start to see problems not as failures that collapse in a pile of incomprehensible error messages, but as the start of a conversation.
Special thanks to Elizabeth Goodman for her comments on an early draft, inspiration from Steven Johnson’s Emergence, Ben Cerveny’s thoughts on games and cultivating technology, Scott Jenson’s thoughts on discovery, and Luke Plurkowski’s on conversations.
Mike Kuniavsky is the principal scientist, innovation services at PARC and will be speaking at our Mobilize conference Oct 16th and 17th in San Francisco.