Consider this. An alarm clock that knows when to wake you up without being set. A coffee machine that knows when you will wake up late and brews coffee (the way you like). A refrigerator that knows you’re out of mayo and bread, and orders them for you.
Better yet, a network of such connected devices that all understand your exact likes, dislikes, allergies, target health habits, schedule and the-other-most-relevant-artifacts-about-you to do exactly the right thing on your behalf. And do so without crossing the line on privacy. And do so in real time.
Sounds like the Internet of Things? Like ubiquitous computing? Like context awareness? Like the perfect assistant? The answer is yes, yes, yes and yes. And that is a bit of a problem.
In more than one discipline, such über use cases have been under discussion and research for a long time. Recently, Jennifer Healey discussed her vision of the Internet of Things at the annual [email protected] event, where she talked about how Weiser’s vision – that the future of ubiquitous computing was based on devices that “disappear” – was off in a crucial way: we neither want dumb devices that only understand “yes” or “no” questions, nor do we want devices that are too smart for our own good.
In theory, many of us would want to have Watson infused into every device we touch or use and even into the environments we walk into. But in the pursuit of such grand visions, we’ve spent over a decade wasting opportunities to connect with the user via simple, incremental ways.
Think instead of an alarm clock that notices an early meeting on your calendar the next morning and asks if you would like to wake up earlier than usual. Even though it doesn’t figure out whether you plan to attend that meeting in person or remotely from home, it adds value simply by reminding you of things to be done. It’s a crucial difference. A refrigerator that asks whether you would like to order bread and fulfills that action on your behalf is still useful, even if it didn’t figure out that you were going on vacation the following day.
It is still early days
The last five or so years have seen an explosion in the use of sensors in connected devices, as well as an exponential improvement in the maturity of contextual technologies, such as location. It’s still early, and it’s understandable that we don’t yet have a network of all-knowing digital personal assistants working for us. Yet given the mapping tools and sensors we do have, it is surprising that even somewhat obvious use cases like “automatically check for congestion along the route and suggest an alternate route when there is an incident ahead” are not yet a reality unless the user actively seeks out that information. Two decades after Weiser wrote about ubiquitous computing, we are only now venturing into experiments around this class of technologies.
There are many experiments already underway – Google Now, Sherpa, Donna, etc. – all trying to accelerate our journey to a digitally smarter world. So, sooner rather than later, we will see more and more of these experiences come to life. But all of them will need to start small, work with the user and build incrementally to succeed.
Focus less on all or nothing
The fundamental problem I see with these über use cases around the Internet of Things and ubiquitous computing is that they try to achieve too much at once. It is a complex area: One that, crucially, requires a process of educating the user to trust many kinds of smart(er) devices. One where the devices have to work with a lot of disjointed and fragmented sources of data to really understand the user and their needs. One where life can get in the way and change what the user is thinking or about to do next (to keep our example: maybe, just maybe, you might decide to give up coffee one morning).
Focusing our efforts on the pursuit of the “perfect” higher-level vision often comes at the exclusion of many viable capabilities that aren’t so far-fetched right now. I contend that yes or no questions are, in fact, a perfectly fine starting point for these systems. Starting there does not imply that these systems won’t get better over time. Rather, starting there allows us to walk before we have to run. These devices, by acting at first as memory-augmenting tools that prompt us for actions, can come to understand us in the process of working towards that perfect vision.
User in the loop is essential
Leaping to a world where everything is connected and every device knows precisely the right things about a user is impractical. We cannot build these systems without getting help from the user – help that needs to be subtle and unobtrusive. Setting aside the need to take our time due to technical limitations, we need this approach to gain users’ trust through transparency. We humans like to feel in control and be in charge. A system that makes us feel stupid or inadequate or useless is simply no good. After all, everything hinges on user acceptance of these systems.
More than ever, the time is right for us to adopt these smarter devices and let the machines work for us. But, all along, it must be about empowering us, not replacing us.
The vision revisited
Back to Weiser’s ideas of building technologies that “disappear” and “weave into our lives.” It is a brilliant vision. It is the vision we need to realize. It doesn’t advocate ending with dumb interactions. It advocates a future that is smart enough to know when to interact and when to fade into the background.
We will soon be wearing and walking into computing that is all around us – surfaces that look exactly as they do today, but transformed to be part of the connected world. It will be a revolution that allows technology to be a part of our lives, rather than the opposite.
Vidya Narayanan is an engineer at Google. Previously at Qualcomm and Motorola, she has been working on internet and mobile technologies for more than a decade. She blogs at techbits.me and on Quora. Follow her on Twitter @hellovidya.