
Summary:

At the IEEE Technology Time Machine Symposium last week I listened to the world’s leading academics, engineers, executives, and government officials project what the world will look like in 2020. The future brings these technologies together for everything from enhancing the human experience to improving environmental sustainability.


Editor’s note: This is the first of a two-part post. Part 1 outlines core devices and technologies, and Part 2, tomorrow, will look at networks and systems.

I had the privilege of keynoting the inaugural IEEE Technology Time Machine Symposium last week in Hong Kong, where I listened to the world’s leading academics, engineers, executives, and government officials project what the world will look like in 2020. Their predictions were based on revolutionary technologies for processing, sensors, and displays becoming integrated into global systems that can do everything from enhancing the human experience to improving environmental sustainability.

Predicting the future is a challenge, since its course depends on rapidly changing technologies integrated into large-scale systems whose acceptance will depend on human behavior, global demographics, and macroeconomic and political dynamics. Nevertheless, the IEEE Technology Time Machine Symposium helped provide a glimpse into the possible. As Rico Malvar, chief scientist of Microsoft Research, pointed out, today’s innovative products, such as Microsoft’s Kinect, require interdisciplinary collaboration. In the case of the Kinect, that collaboration spanned computer vision, machine learning, human-computer interaction, speech recognition, and more.

The components

Intel's new 3-D transistors at 22nm.

I began my keynote by reviewing a number of disruptive technologies that are surprisingly far along. These include Intel’s “Ivy Bridge” Tri-Gate 3-D transistors, which are built vertically like a skyscraper instead of horizontally like a mall and are being readied for production in 2012; quantum computers, which are no longer just a theoretical concept but are being shipped commercially; and the long-theorized fourth circuit element, the memristor, now prototyped by HP, which may find use in replicating the function of the human brain (sub. req’d). And chips aren’t just for processing or memory: Wouter Leibbrandt of the Advanced Systems Lab at NXP Semiconductors stated that NXP’s new sensor chip has the power of the original Pentium but fits on the head of a pin, beginning to make “smart dust” sensors a reality. All of this means processing power will keep getting faster, smaller, and cheaper.

The devices

Displays are getting thinner, lighter, higher-resolution and more power-efficient, using various approaches such as OLED and e-Ink. Experts such as Prof. Hoi-Sing Kwok of the Hong Kong University of Science & Technology (HKUST) were confident that transparent, flexible, color touchscreen displays are, well, on a roll: existing prototypes keep improving, and commercial versions are just around the bend.

If you like HD, just wait. While today’s 1080p displays have a resolution of about 2 Megapixels (roughly 2K x 1K), 35-Megapixel displays have already been fabricated, 100-Megapixel tiled displays are commercially available, and 287-Megapixel tiled video walls have been constructed. How much is enough? Kwok has calculated that a medium-sized room fully covered with video walls at the resolution of the human eye would need 3 Gigapixels, 1,500 times today’s HD. Such a room might be useful for viewing HKUST’s record-breaking photograph, which is over 150 Gigapixels.
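To see how a figure like Kwok’s can be reached, here is a rough back-of-the-envelope sketch in Python (not his actual calculation); the room size, viewing distance, and one-arcminute acuity figure are illustrative assumptions:

    import math

    # Illustrative assumptions, not Kwok's actual figures: four 4 m x 2.5 m
    # video walls viewed from about half a meter, at the commonly cited limit
    # of human visual acuity of roughly one arcminute per pixel.
    viewing_distance_m = 0.5
    acuity_rad = math.radians(1 / 60)                 # one arcminute in radians
    pixel_pitch_m = viewing_distance_m * math.tan(acuity_rad)

    wall_area_m2 = 4 * (4.0 * 2.5)                    # four walls, 40 m^2 total
    room_pixels = wall_area_m2 / pixel_pitch_m ** 2

    hd_pixels = 1920 * 1080                           # 1080p, about 2 Megapixels
    print(f"Room at eye resolution: {room_pixels / 1e9:.1f} gigapixels")
    print(f"Versus 1080p: about {room_pixels / hd_pixels:,.0f}x")

With these assumed numbers the estimate lands at roughly 2 Gigapixels, around 900 times 1080p; the same order of magnitude as Kwok’s 3-Gigapixel figure, with the exact value depending on room size and viewing distance.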

One surprising challenge in building large displays is that distributing TVs at an economically attractive scale requires using today’s transportation infrastructure, limiting the size of the glass to one car lane wide and short enough to fit under an overpass. However, wall-sized flexible displays could be rolled up, shipped, and carried through the front door.

While today’s 3-D approaches have an uncertain future, Kwok believes the most promising 3-D display technology is electro-holographic (picture Princess Leia’s “Help me, Obi-Wan”). A challenge for large, high-resolution displays and electro-holographic displays is not just the display itself, but the processing power required to drive it. Moore’s Law and the technologies I reviewed above should help. Large images may not require large devices: Kwok expects every cell phone to incorporate a pico-projector (a laser projector that can project onto a surface larger than the device itself), the same way that every cell phone now has a camera.

It’s not news that touch screens are becoming popular, but the next enhancement will be “hover” touchscreens, in which each pixel is also a sensor, enabling gestural interfaces without touch. Such technology was shown off last year and would require adoption by device makers as well as developers.

At the other end of the spectrum are very small displays. The next generation of mobile devices may not be handheld, but may perch on your nose or float on your retina. Masahiro Fujita, president of Sony Systems Technologies Laboratories, outlined a concept for eyeglasses with transparent lenses that double as augmented reality displays, wirelessly linked to your social network and real-time information, feeding you live details as you visually scan. It could tell you, “That’s the restaurant where Bobby had that great salad, and it’s got a table free in 10 minutes!” Or, as Jian Ma, chief scientist of the Wuxi Sensing Institute, wryly observed, it could alert a traveler that “your luggage is no longer with you.”

The next step is the wireless contact lens display, which is already under development. Ultimately, though, devices won’t be something we wear, but something we implant. Brain-computer interfaces that let us control devices with our minds (PDF) or directly stimulate the cortex for artificial vision have already been built.

Sound is also important. Fujita of Sony demonstrated a 7.1-channel sound system with “high” front speakers and a “high” mix, enabling sound sources to traverse not only left to right but also top to bottom. If that’s not enough, NHK has been experimenting with 22.2-channel sound, which delivers even more enveloping surround sound using 24 speakers. Next-generation gaming and entertainment will leverage all of these approaches: Fujita played a cinema-quality video of racing cars, challenging the audience to determine which components were real and which were computer-generated (answer: everything was CGI), and pointed out that the vehicle dynamics (bouncing, traction) could be generated interactively in real time.

So what does all this mean for the networks and the backend systems? Please read Part 2 on Sunday for the details.

Joe Weinman leads Communications, Media, and Entertainment Industry Solutions for Hewlett-Packard. The views expressed herein are his own.

  1. Congrats on being invited to the inaugural IEEE Technology Time Machine Symposium. I think you have given a great overview of, and insight into, your “predictions” of how technology will inform our lives and business. Great job!

  2. ” the memristor, now prototyped by HP, may find use in replicating the function of the human brain (sub. req’d)”

    The question is: do we have to build on old assumptions and simulate down to the synapse level? We know that the problem is a little more complex than that anyway. How is BAC[1] (back-propagating activated calcium) reflected in a memristor? Or all of this [2,3]? Do we have to do it at that level, or are the math principles reflected in higher-level abstractions? Priming a neuron can be found in high-level decision making[4], for example. Which is equally important, as a math principle, in voice recognition and understanding, vision and visual illusions (oops), and complex problem solving (teaching a machine math).

    In other words, all this HW is pretty useless if the principles of a smart system are not reflected in the SW.
    Like trying to build a touch system without changing the underlying SW assumptions, which took over a decade to resolve. Or do I need a 3-D transistor to make a dumb system faster, or do I use better sensor integration to give the system a touch of self and make it smart? I for one don’t want to be bombarded with useless crap; I want a machine to reflect on me and provide information when and what I need. How can a machine reflect on me without “self”? Do I need a data center for that, or is it a rather simple principle reflected in [4]?

    Just some thoughts from the SW side.

    1. http://www.jneurosci.org/content/26/41/10420.full
    2. http://scienceblogs.com/pharyngula/2010/08/ray_kurzweil_does_not_understa.php
    3. http://spectrum.ieee.org/tech-talk/semiconductors/devices/blue-brain-project-leader-angry-about-cat-brain
    4. http://www.wired.com/wiredscience/2011/06/brain-risk-probability/

  3. Great stuff. Technology keeps getting better, and human knowledge will continue to explore opportunities that can make life better. Mobile devices have seen the better of technology, but we are yet to see the very best. 2020 will be awesome, and various technology companies will keep innovating to offer efficient and productive devices.

  4. Thank you for this glimpse into the near-future, presented in a way that a non-geek can understand. By far the most exciting possibilities are presented by technologies that could help the handicapped gain lost or missing abilities.

    I look forward to part 2 and suggest a part 3: predictions from 10 or 20 years ago that haven’t come about – and why.

