The big Microsoft news today is surface computing, and as a nearly full-time touch computer user, I couldn’t be happier. The last year of using a touchscreen Tablet PC has been blissful, but in reality it was just baby steps compared to the surface computing announcement and its impact on the future. At the 2007 CES keynote, I got a chance to see surface computing in action as Bill Gates demonstrated it in the home of tomorrow. It’s not tomorrow yet, not for the home anyway, but it’s right around the corner for commercial applications, and the home won’t be far behind. I’ve put some random thoughts on what this all means to me after the jump…. enjoy, read, comment. Heck, reach out and touch ‘em if you want to!
- Some of us tech-addicts are already looking forward to having these in the home. We’re just not there yet in terms of cost, size and practical applications. From a pricing perspective, the average household isn’t going to pay $5,000 or more for a 30-inch, glass-topped coffee table just to pick a playlist. That’s OK; I don’t see this as a negative. Rather, I see it as the reason Microsoft has partnered with service companies to introduce surface computing. That also brings me to point number two.
- This first iteration of surface computing has very specific requirements. Some of the recognizable items are found through infrared-ink barcodes and other tags. How many of these types of items do you have in your home? If you didn’t say ‘none’, I’d either say, "I don’t believe you" or "You must work at Microsoft on the surface computing project". :) Again, that’s OK for now; I’m just trying to quell the consumer question of "when can I get this?", but it segues nicely into the next point….
- Computers are getting smarter all the time, and that’s going to drive this type of product into the home. Eventually, mainstream apps and computers won’t need special barcodes, tags or other identifying mechanisms to determine what an object is. Imagine small cameras with access to a database of photos figuring out what you place on the surface. Picture an intuitive side of computing we haven’t seen used yet; take the glass-on-the-dining-table scenario. If I place a wine glass on the table, the surface can determine it doesn’t have the footprint or shape of, say, a pilsner glass. Logically, it makes no sense to present a beer list on the surface, so it presents a wine list instead…
- Touch truly does bring the word ‘personal’ back to computing. Granted, I’m a little more into my devices than most consumers, but let me ask you a question: when you look at your laptop or desktop, what do you see? I’m guessing the majority of you see a machine, a tool, something that you use. With me, it’s different. I look at my UMPC as an extension of my activities. I’m constantly touching the device to interact with it. Far-fetched and odd as it sounds, my UMPC and I are a team getting through the day together. [Yes, I fully expect a wide variety of comments on this one....]
- Surface computing bridges the physical world and virtual world like nothing before. If you watch the video, you can see countless examples of object information exposed and those objects are real objects. Call it OOI or Object Oriented Information for lack of a better description. Place an object on the surface and you’ll see usable information about it; information that can interact with other objects or services.
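To make the wine-vs-pilsner idea above concrete, here’s a minimal sketch of how a surface might map a detected footprint to a menu. Everything in it is hypothetical: the object names, the base-diameter measurements and the tolerance are all made up for illustration, not taken from Microsoft’s actual recognition system.

```python
# Hypothetical sketch: identify an object by its detected footprint
# instead of a barcode or tag. Profiles and measurements are invented.

# (object name, typical base diameter in cm, menu to present)
PROFILES = [
    ("wine glass", 7.0, "wine list"),
    ("pilsner glass", 5.5, "beer list"),
    ("coffee mug", 8.5, "coffee menu"),
]

def menu_for_footprint(diameter_cm, tolerance=0.5):
    """Return the menu for the closest-matching profile,
    or None if nothing is within the given tolerance."""
    best = min(PROFILES, key=lambda p: abs(p[1] - diameter_cm))
    return best[2] if abs(best[1] - diameter_cm) <= tolerance else None

print(menu_for_footprint(7.1))   # a wine-glass-sized base -> wine list
print(menu_for_footprint(20.0))  # nothing matches -> None
```

A real system would of course match shape, not just one diameter, but the design choice is the same: recognize the object, then surface the information that makes sense for it.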
Just my random thoughts, spilled onto the page in under 20 minutes; first impressions and interpretations. What’s more important are your thoughts in this conversation. Even though this is a commercial application of surface computing, what do you think of the potential? Is this a big deal or is it just hype? Where do you see it in five years: gone or in your home?