Show Me the Display Innovation


The Stallion cluster at the Texas Advanced Computing Center

When I read over the patent filings related to Apple’s 3-D interactive, hyper-reality displays that MacRumors is writing about today, I felt a little jolt of excitement wholly unrelated to the two cups of espresso I’d already consumed. What can I say? I’m a display nerd. And I think one of the areas most ripe for innovation is how we see the information we’re trying to consume. In other words, it’s time to start messing with our monitors.

Think about it. The web is rapidly moving from text to video and becoming increasingly personal, yet we’re still viewing it on a flat screen — sometimes two or three flat screens. The most prevalent computer of the next decade, the mobile phone, sports a screen measuring some 4 inches. What if instead of merely viewing something on the screen, we could also interact with it? The basic building blocks for such an experience already exist in the form of gestural controls and touchscreens that use embedded cameras, as well as LCD screens with built-in optics to sense touch. And we have the processing power and the projectors.

So I’m hopeful that sometime in the next five years we’ll see the realization of the technology covered in Apple’s patent, which involves using a video camera to help the display react to the user’s position, or something similar. The office environment seems the most likely home for the initial innovation given the battery constraints of a cell phone (even a pico projector attached to a cell phone sucks the battery in no time). Better visualization tools are already in use in the supercomputing world, but they consist mainly of arrays of eye-popping monitors.

Frankly, I don’t want a screen; I want a projector equipped with a camera that allows me to move about my office to tackle different projects, conversations and news streams. That way I can orient myself physically in my work. Plus, a presence-awareness system could figure out what I’m working on based on my position and update my status for me.

Yeah, it’s far-fetched, but we have the cameras, the software to visually track people and ways to translate those movements into code a computer can understand. It would give our powerful CPUs something to do, while our graphics chips render the information on a wall, holographically or on a large curved display that spans a person’s visual field.
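For the curious, the core loop is simpler than it sounds. Here’s a minimal sketch (not from Apple’s patent — the mapping and the `face_to_viewport` helper are my own illustration) of how a tracked head position might be translated into something a display can act on: detect the viewer’s face in the camera frame, then pan the rendered view in the opposite direction so the content appears fixed in space. Actual face detection (say, via a computer vision library) is stubbed out here.

```python
def face_to_viewport(face_center, frame_size, pan_range=200):
    """Hypothetical helper: map a detected face center (in camera-frame
    pixels) to a viewport pan offset (in display pixels).

    The offset from the frame center is normalized to [-1, 1] and
    inverted, so moving your head left pans the view right -- the
    scene seems to stay put while you look 'around' it.
    """
    fx, fy = face_center
    w, h = frame_size
    pan_x = (w / 2 - fx) / (w / 2) * pan_range
    pan_y = (h / 2 - fy) / (h / 2) * pan_range
    return (pan_x, pan_y)

# A viewer centered in a 640x480 camera frame: no pan.
print(face_to_viewport((320, 240), (640, 480)))  # -> (0.0, 0.0)
# A viewer at the far left edge: pan the view fully right.
print(face_to_viewport((0, 240), (640, 480)))    # -> (200.0, 0.0)
```

A real system would smooth the face position over several frames to avoid jitter, but the translation step itself is just this kind of coordinate mapping.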

If adding a second display increases productivity between 9 percent and 30 percent, imagine what one could do if that display were both interactive and intelligent enough to figure out what you’re doing or alert you to information you need to know. If we’re gonna navigate our world in real time, we’re gonna need a better cockpit.

Mike V

Much as I like the idea of wearable pico projectors, I think video eyewear has a much more promising future for display technology. Projectors are limited by the consistent availability of a sizeable surface and, to some extent, by lighting conditions; video eyewear isn’t, and it will be able to generate screens with a much larger field of view. Projectors will still be a dominant peripheral for some applications, like use in groups, at least until high-bandwidth wireless and video eyewear adoption allow for easy screen sharing. Cameras can be integrated into either system to take advantage of computer vision tech for analyzing the environment. Add sensors to the eyewear that can measure the direction of the head, and perhaps an inward-looking camera that can track eye movement, and you’ve got a pretty good system.

Stacey Higginbotham

Mike, that’s great for mobile. I like the power of being able to move around within my controlled home-office environment, but that’s not practical for cube-life or even in a Starbucks.
