When I read over the patent filings related to Apple’s 3-D interactive, hyper-reality displays that MacRumors is writing about today, I felt a little jolt of excitement wholly unrelated to the two cups of espresso I’d already consumed. What can I say? I’m a display nerd. And I think one of the areas most ripe for innovation is how we see the information we’re trying to consume. In other words, it’s time to start messing with our monitors.
Think about it. The web is rapidly moving from text to video and becoming increasingly personal, yet we're still viewing it on a flat screen, sometimes two or three flat screens. The most prevalent computer of the next decade, the mobile phone, sports a screen of roughly 4 inches. What if instead of merely viewing something on the screen, we could also interact with it? The basic building blocks for such an experience already exist in the form of gestural controls, touchscreens that use embedded cameras, and LCD screens with built-in optics to sense touch. And we have the processing power and the projectors.
So I’m hopeful that sometime in the next five years we’ll see the realization of the technology covered in Apple’s patent, which involves using a video camera to help the display react to the user’s position, or something similar. The office seems the most likely home for the initial innovation, given the battery constraints of a cell phone (even a pico projector attached to a cell phone drains the battery in no time). Better visualization tools are already in use in the supercomputing world, but they consist mainly of banks of eye-popping monitors.
Frankly, I don’t want a screen; I want a projector equipped with a camera that allows me to move about my office to tackle different projects, conversations and news streams. That way I can orient myself physically in my work. Plus, a presence-awareness system could figure out what I’m working on based on my position and update my status for me.
Yeah, it’s far-fetched, but we have the cameras, the software to visually track people and ways to translate those movements into input a computer can understand. It would give our powerful CPUs something to do, while our graphics chips render the information on a wall, holographically or on a large curved display that spans a person’s visual field.
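The core trick, turning a viewer's tracked position into a display that reacts, is simple enough to sketch. Here's a minimal, purely illustrative example (not Apple's method; the function name and parameters are my own invention) that assumes some camera-based tracker has already reported the viewer's head position, and maps it to a parallax shift for the rendered content:

```python
# Illustrative sketch: map a tracked head position in the camera frame to a
# parallax offset for rendered content, so the scene shifts with the viewer
# and fakes depth. All names and numbers here are hypothetical.

def parallax_offset(head_x, head_y, frame_w, frame_h, max_shift=40):
    """Return an (x, y) pixel offset for the rendered scene.

    A head centered in the camera frame produces no shift; a head at a
    frame edge shifts the content the opposite way by max_shift pixels,
    which is what creates the illusion of looking "around" the display.
    """
    # Normalize the head position to [-1, 1] around the frame center.
    nx = (head_x - frame_w / 2) / (frame_w / 2)
    ny = (head_y - frame_h / 2) / (frame_h / 2)
    # Shift content opposite to head movement, capped at max_shift.
    return (-nx * max_shift, -ny * max_shift)
```

A real system would feed this from a face-detection loop running on the camera feed and smooth the output over time, but the mapping itself is just this handful of arithmetic.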
If adding a second display increases productivity by between 9 and 30 percent, imagine what one could do if that display were both interactive and intelligent enough to figure out what you’re doing or alert you to information you need to know. If we’re gonna navigate our world in real time, we’re gonna need a better cockpit.