3D as a form of entertainment has gone from novelty to semi-forgotten relic. Now, thanks to a recent spate of 3D movies, it’s back. It’s likely to stay around this time, because the business base is strong and new tech offers better options for capturing, distributing and displaying 3D images.
Old 3D production methods were hampered by guesswork about depth perception and focus, but digital processes have vastly improved both. That has led to more 3D-capable TVs, more 3D-enabled theaters, and better software tools. But not all 3D content is the same or made the same way. 3ality Digital, for example, focuses on live-action recording, while others, like RealD, concentrate on different parts of the chain, such as theatrical projection. Different flavors of cameras are used. Even the displays the images emanate from differ: a big IMAX screen fed by 3D projectors doesn’t have the depth of Zebra Imaging’s holograms, which come in handheld slabs. It’s all a bit overwhelming, but the key is that most of it can now be experienced by anyone, and what can’t yet offers a tantalizing glimpse of what’s to come, including a move beyond movies and into the real meatspace of business and war.
Let’s take a look at three rising companies working in 3D:
3ality works on several stages of the 3D pipeline. It helps TV and movie productions upgrade their equipment from 2D to 3D-capable and uses pixel-correcting software to improve depth perception.
Depth perception is one of the industry’s big challenges. To create 3D, two cameras must record the scene at the same time, side by side, replicating the two-eyed stereoscopic vision of humans. But the two video streams almost always contain small mismatches that are hard to align; that’s what caused the fuzziness in old 3D movies and gave people headaches. To keep the streams coordinated, 3ality uses cameras with smart motors that calibrate key optical settings (like aperture and focal length) and software that cleans up the images. A trained “stereographer” can operate these features remotely to catch and correct bad 3D alignments.
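One of the mismatches that alignment software has to catch is vertical offset between the two eyes' frames. Here's a minimal, purely illustrative sketch of that idea: it estimates the vertical shift between two frames by comparing their row-brightness profiles. The function name and approach are our assumptions, not 3ality's actual algorithm.

```python
import numpy as np

def vertical_offset(left, right, max_shift=10):
    """Estimate the row shift to apply to `right` so it lines up with `left`.

    Compares average-brightness-per-row profiles at candidate shifts and
    returns the shift with the smallest mean absolute difference.
    """
    left_profile = left.mean(axis=1)    # average brightness of each row
    right_profile = right.mean(axis=1)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.abs(left_profile - np.roll(right_profile, s)).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# Synthetic check: a right-eye frame shifted down 3 rows relative to the left.
frame = np.tile(np.sin(np.linspace(0, 8 * np.pi, 100))[:, None], (1, 50))
shifted = np.roll(frame, 3, axis=0)
print(vertical_offset(frame, shifted))  # → -3 (shift right frame up 3 rows)
```

Real rigs deal with far messier errors (rotation, keystone, mismatched focal lengths), which is why motorized calibration matters; this only shows the simplest case.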
When 3ality post-produced U23D, manual alignment work ate up most of the schedule and the budget. That pushed the company to build calibration natively into its systems. As a result, video engineers can now concentrate on effects editing instead of managing geometry, and the system is simple enough that almost anyone can be trained and 3D-certified. 3ality says upgrading adds about 10 to 25 percent to the cost of an average 2D TV broadcast; for an hour-long show, that works out to about $750,000.
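For context, those two figures together imply a rough base budget for the 2D production. The arithmetic below is ours, working backward from the article's numbers, not 3ality's own accounting.

```python
# If the 3D upgrade is 10-25% of the base budget and comes to ~$750,000,
# the implied 2D budget for an hour-long show is:
UPGRADE_COST = 750_000

implied_budget_max = UPGRADE_COST / 0.10   # $7.5M if the upgrade is 10%
implied_budget_min = UPGRADE_COST / 0.25   # $3.0M if the upgrade is 25%
print(f"${implied_budget_min:,.0f} to ${implied_budget_max:,.0f}")
# → $3,000,000 to $7,500,000
```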
3ality’s main competition is Pace’s Fusion 3D camera. With input from director James Cameron, the cam is modded for different needs, like over-the-shoulder mobile filming. Pace also provides technical support for the equipment, but isn’t considered a full-service company.
Texas Instruments’ digital light processing (DLP) chips make 3D in HDTVs possible by splitting a video feed into two separate images, one for each eye.
Images are cut into a checkerboard “interlacing pattern”: one set of alternating squares is assigned to the left eye, the other to the right, and each set is sent to the corresponding lens of a pair of LCD shutter glasses. As a video plays, each eye sees a grid with a slightly different perspective, and the brain’s natural optics fuse them into a single image with depth. But the eyes need help, which is where the millions of tiny mirrors on the surface of each chip come in. They must switch between frames fast enough that each eye perceives smooth, continuous motion, the way an object moving through real space does. At 60 Hz per eye, DLP TV chips come close to eliminating fuzzy images. Still, they’re not quite there yet.
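The checkerboard pattern itself is simple to picture in code. This is a toy sketch of the interlacing idea described above, not TI's actual DLP pipeline: pixels from the left- and right-eye frames are woven into one frame along a checkerboard mask.

```python
import numpy as np

def checkerboard_interlace(left, right):
    """Weave two equal-sized grayscale frames into one checkerboard frame."""
    assert left.shape == right.shape
    rows, cols = np.indices(left.shape)
    mask = (rows + cols) % 2 == 0      # True on the "left-eye" squares
    return np.where(mask, left, right)

left = np.zeros((4, 4), dtype=int)    # left-eye frame: all 0s
right = np.ones((4, 4), dtype=int)    # right-eye frame: all 1s
print(checkerboard_interlace(left, right))
# [[0 1 0 1]
#  [1 0 1 0]
#  [0 1 0 1]
#  [1 0 1 0]]
```

The display and glasses then do the reverse: pull the two pixel sets apart again and route each to the correct eye.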
Currently, each mirror corresponds to at least one pixel of on-screen resolution, and the mirrors will likely get smaller. But look for faster switching to further improve video quality. Studios are using DLP 3D to upgrade old movies, and 3D theaters rely on it too. While DLP dominates 3D chip processing, GPU giant Nvidia recently launched an IR-emitter-based system that renders PC games in 3D and is compatible with TVs that have DLP chips.
A company is bringing full-color, glasses-free holograms to life, and R2D2‘s got nothing to do with it. Instead, Zebra Imaging creates true 3D objects through software and a fast, powerful printing machine.
ZI’s software places a virtual camera at every point on a vertical plane through the digital model, recording the model’s surface from that point of view. Since there are about a million of these points, the amount of data is huge. Zebra then prints the data (each point’s distinct POV) onto a plastic polymer, creating a sheet of 1-mm holographic tiles. Each tile is basically a light-sensitive store of image data. When light shines on the sheet, it scatters, and all the recorded angles are revealed at once, popping up a free-standing 3D model. When I reached out to “touch” the 3D buildings in a demo, I felt nothing but air, revealing a true optical illusion.
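To get a feel for why the data is "huge," here's a back-of-the-envelope estimate. The million-point count comes from the article; the per-point view resolution and color depth are illustrative assumptions, not Zebra Imaging's actual specs.

```python
# Rough, assumption-laden estimate of the raw data behind one hologram print.
POINTS = 1_000_000        # virtual camera positions (figure from the article)
VIEW_PIXELS = 256 * 256   # assumed resolution recorded at each point
BYTES_PER_PIXEL = 3       # assumed 24-bit RGB color

total_bytes = POINTS * VIEW_PIXELS * BYTES_PER_PIXEL
print(f"{total_bytes / 1e12:.1f} TB uncompressed")  # → 0.2 TB
```

Even at these modest assumed resolutions, a single print implies hundreds of gigabytes of raw view data, which is why a fast, powerful printing machine matters.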
This tech is important for a few reasons. Its printing scalability (and physical flexibility) offers the best portable 3D-hologram imaging yet. The wide viewing angles and true depth of the renders help people collaborate more easily in real space, as opposed to around a 2D screen (and might help avoid visual fatigue). And the 3D slabs are cheaper and faster to make than models built from other materials, like clay. This is why the U.S. Army likes them: maps of constantly changing battlefields can be printed on the fly.
Unfortunately, the prints still cost thousands of dollars, so it’ll be a few years before they’re widely available.
Follow Jose Fermoso on Twitter at twitter.com/fermoso