
Summary:

The smart home won’t be built using apps and connected devices. To truly embed computing into our home environment we need better computer vision, projectors and a new understanding of computing.

[Image: frog design’s Room-e]

Today home automation works via remote controls or smartphones, but most people agree there are intermediate steps before we get to voice- or gesture-controlled homes. Within the decade, every smart home should have an always-on device sending information about who is in a room and where they are in relation to other people and objects in the room.

After visiting the frog offices in Austin, Texas on Thursday to talk with Mark Rolston, frog’s chief creative officer, ahead of our conversation at our Roadmap conference on Nov. 6, I came away convinced that always-on computer vision will be an essential element in getting us to a different style of computing. Computer vision is just a tool; the actual application will be the ability to interact with computers in a variety of places in your home, where and when it makes sense for you.

[Image: information projected onto a surface]

For example, you might be standing in your kitchen and decide to order takeout. You say, “Let’s order takeout,” and on the kitchen counter a list of local Yelp recommendations appears. Your spouse might see the list and want to order Chinese, so the list narrows to show only Chinese options. You might want Italian, so you can ask for that. Then the two of you can confer or invite the kids to take a vote. If you move to the living room to ask the kids, the projection will appear on the TV or the coffee table.

From there you can say, “Let’s order Romeo’s” and see a menu or call up your last order and just get that. In this scenario no one pulled out a smartphone. No one opened a laptop or pulled out a tablet. And the components to make this possible are mostly here today.
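
If you’re curious what the plumbing for that kind of follow-me interaction could look like, here is a minimal sketch in Python. Everything in it (the room-to-surface map, the session object, the keyword matching) is invented for illustration; it is not frog’s actual software.

```python
# Hypothetical sketch of the "follow-me" takeout flow described above.
# None of these names come from Room-e; they just illustrate the idea
# that the display follows the people, not the other way around.

SURFACES = {
    "kitchen": "counter projector",
    "living_room": "TV / coffee table",
}

class TakeoutSession:
    def __init__(self, options):
        self.options = options        # e.g. pulled from a Yelp search
        self.cuisines = set()         # cuisines someone has asked for

    def request(self, utterance, room):
        # Crude keyword spotting stands in for real speech understanding.
        for cuisine in ("chinese", "italian"):
            if cuisine in utterance.lower():
                self.cuisines.add(cuisine.capitalize())
        self.render(room)

    def render(self, room):
        # Whichever room the tracker says the speaker is in decides
        # where the list gets projected.
        surface = SURFACES.get(room, "counter projector")
        shown = [name for name, cuisine in self.options
                 if not self.cuisines or cuisine in self.cuisines]
        print(f"[{surface}] {shown}")

session = TakeoutSession([("Romeo's", "Italian"),
                          ("Golden Dragon", "Chinese"),
                          ("Thai Spice", "Thai")])
session.request("Let's order takeout", room="kitchen")      # full list
session.request("How about Chinese?", room="kitchen")       # spouse filters
session.request("Show me Italian too", room="living_room")  # follows you
```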


Those aren’t Halloween masks; it’s how the computer sees the frog designers’ faces to tell whether they are looking at it.

Computer sight is a necessary ingredient

That’s what I went to the frog offices to see: how to build such a vision. The frog team calls its setup Room-e, and it’s basically a Microsoft Kinect and a projector attached to a computer running some amazing software. The Kinect provides not only the video camera but also an array mic that contributes to the computer’s “sight.” Using the setup, I could point to a light and it would turn on. That’s much nicer than using an app or getting up to flip a switch. (We’ll also have the designer of the Xbox One, Microsoft’s Carl Ledbetter, at Roadmap.)
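
To give a feel for what that pointing interaction involves under the hood, here’s a rough sketch. A Kinect skeleton tracker would supply the joint positions; the light coordinates, the room layout and the angular tolerance are all invented for the example.

```python
import numpy as np

# Rough sketch of "point at a light and it turns on." A Kinect skeleton
# tracker would supply the shoulder and hand positions; the light
# coordinates and the 10-degree tolerance are invented for this example.

LIGHTS = {
    "floor lamp":    np.array([2.0, 1.2, 3.5]),   # metres, room coordinates
    "ceiling light": np.array([0.0, 2.6, 2.0]),
}

def pointed_light(shoulder, hand, max_angle_deg=10.0):
    """Cast a ray from shoulder through hand; return the light nearest it."""
    ray = hand - shoulder
    ray = ray / np.linalg.norm(ray)
    best, best_angle = None, max_angle_deg
    for name, pos in LIGHTS.items():
        to_light = pos - shoulder
        to_light = to_light / np.linalg.norm(to_light)
        angle = np.degrees(np.arccos(np.clip(ray @ to_light, -1.0, 1.0)))
        if angle < best_angle:
            best, best_angle = name, angle
    return best

shoulder = np.array([0.0, 1.4, 0.0])
hand     = np.array([0.5, 1.35, 0.87])   # arm extended toward the lamp
target = pointed_light(shoulder, hand)
print(f"toggle: {target}")               # -> toggle: floor lamp
```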

Computer vision isn’t like human vision; a computer can “see” using disruptions in sound waves, extrapolating via footfalls or even disruptions in wireless networks. So while an always-on camera might discomfit people, it doesn’t necessarily need to be a video feed of what’s happening in the home. Early versions of such implementations, however, probably will be.
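
As a toy illustration of that camera-free “sight”: a person moving through a room perturbs Wi-Fi signal strength, so a jump in signal jitter can stand in for a motion event. The readings and the threshold below are made up for the example.

```python
from statistics import pstdev

# Toy illustration of "seeing" without a camera: a body moving through
# a room disrupts Wi-Fi multipath, so unusually jittery signal-strength
# readings can signal presence. Numbers here are invented.

def occupied(rssi_window, threshold_db=2.0):
    """Flag presence when recent signal-strength jitter exceeds a baseline."""
    return pstdev(rssi_window) > threshold_db

empty_room   = [-62, -62, -61, -62, -62, -61]   # steady readings
person_walks = [-62, -58, -66, -60, -65, -57]   # multipath disruption

print(occupied(empty_room))     # False
print(occupied(person_walks))   # True
```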

Frog’s current version uses the original Kinect camera for the Xbox 360, but the higher resolution of the newer Xbox One Kinect will allow the team to build software that recognizes more subtle gestures and facial expressions, so interacting with your lights won’t require an Emeril-like level of enthusiasm.

Rolston showed me a model with the cameras and projectors built into a light bulb, which might turn out to be one way this is implemented. Another option is a mic and a camera embedded in a light switch.

What about voice?

Of course, vision isn’t the only user interface. Voice will play a crucial role in how we interact with the smart home, from calling for takeout to saying “Goodbye” and turning off the lights. The key will be having both, so you can interact naturally with your environment without feeling like Jean-Luc Picard giving commands to the USS Enterprise.

Voice is further along in terms of hardware: check out the recently announced thermostat from Honeywell that you can control via voice (Anthony P. Uttley, VP and GM at Honeywell, will be showing it off at Roadmap during our conversation) or this demonstration of real-time translation from Microsoft. However, computer scientists still need to work on helping computers understand not just the words themselves, but the context of those words. That’s where deep learning research comes in.
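
To make the context problem concrete, consider how the same three words demand different actions depending on where you are and what’s running. The toy resolver below hard-codes that disambiguation with invented rules and device names; the point of the deep learning research is to get there statistically instead.

```python
# Toy illustration of why words alone aren't enough: "turn it off"
# resolves differently depending on room and state. The rules and
# device names are invented; a real system would learn this mapping
# rather than hard-code it.

def resolve(utterance, context):
    if utterance.lower() == "turn it off":
        # "it" is ambiguous; the room's state disambiguates.
        if context.get("tv_on") and context["room"] == "living_room":
            return "tv.power_off"
        return f"lights.{context['room']}.off"
    return "unknown command"

print(resolve("Turn it off", {"room": "living_room", "tv_on": True}))
# -> tv.power_off
print(resolve("Turn it off", {"room": "kitchen", "tv_on": False}))
# -> lights.kitchen.off
```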

In the meantime, Rolston says having a human on the other end of these commands might bridge the gap. He compares it to the OnStar service where both computers and people help with translation and providing the service.

Projection and putting it all together


The key element of the Room-e vision is that the computing doesn’t happen on a computer in front of you, and that it can follow a user based on their orientation and needs. While the frog team glossed over the software and the technical difficulty of actually building apps and services that can take advantage of a multi-device experience, they think projectors are the way to escape the tyranny of the screen.

Rolston points out that projecting images allows people to use computing in different ways, like putting a recipe for a meal on the countertop and letting someone page through it there, where his or her messy hands won’t get a screen dirty. An essential element of this strategy is that people can use their existing stuff in their home, as opposed to a specialty table. The image I saw was projected on a white table, and frog hopes to develop the technology to work on wood and other surfaces.
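
One concrete piece of making projection work on ordinary furniture is keystone correction: pre-warping the frame so it lands square on the tabletop instead of skewed. Here’s a minimal sketch using OpenCV; the corner coordinates stand in for a one-time camera calibration and are invented, as is the recipe-card image.

```python
import cv2
import numpy as np

# Minimal keystone-correction sketch: pre-warp the frame with the inverse
# of the projector-to-table homography so the recipe lands rectangular on
# the countertop. The corner coordinates are invented calibration values.

w, h = 800, 600
recipe = np.full((h, w, 3), 255, np.uint8)        # stand-in recipe card
cv2.putText(recipe, "Step 1: dice the onions", (40, 80),
            cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 0, 0), 2)

# Corners of the projector frame, and where a camera saw them land on
# the table during a one-time calibration pass.
frame_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
table_corners = np.float32([[42, 12], [w - 18, 6], [w - 4, h - 25], [30, h - 8]])

# Map table coordinates back to frame coordinates, so projecting the
# warped image cancels the distortion and appears square on the surface.
H_inv = cv2.getPerspectiveTransform(table_corners, frame_corners)
prewarped = cv2.warpPerspective(recipe, H_inv, (w, h))
cv2.imwrite("projector_frame.png", prewarped)
```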

To me, the demonstration was real enough that I went home frustrated with my current apps and automation, wishing for a computing experience that was less about the computer or even a specific device. While there will be privacy concerns associated with computer vision, the application of this technology in the demo was such that I felt that same frisson of “magic” that Apple has reliably produced with its products.

Maybe Cupertino should focus on commercializing computer vision and projectors for the masses.

Comments

  1. The problem with this kind of thinking is that it ignores reality. It’s thinking stimulated by the desire to sell more and more devices.
    Cost and convenience are fundamental.
    It is far better to have your PC (as in phone or glasses) handle most of the tasks.
    You don’t need displays everywhere if you have full-vision glasses, and you don’t need cameras all over if those glasses interact with the other devices. Sure, we don’t yet have those kinds of glasses, but we have phones, and that doesn’t require more expense.
    A vision where we have lots and lots of hardware around the home is far too costly and would waste a lot of power.
    Smart objects must be only as smart as they have to be, to keep costs down. There is room for a robotic “butler,” but other than that, things must be kept just dumb enough.
    You can’t change the world if you don’t keep things simple, efficient and cheap.

  2. It would be interesting to see how this vision would work in a real home environment with multi-media interference, kids and animals running around, multiple individuals talking at the same time, food and all kinds of articles on the “display” counter. While this vision will almost certainly lead to improvements in the customer interface, moving to a smart home environment will ultimately require a clear customer value function that offers a little more than just collaborative ordering of take out.

  3. Cool!

  4. The NSA will love this stuff.


Comments have been disabled for this post