UPDATED: Online Video an Olympic Feat for NBC

UPDATE: After some questions arose over the origins of the Dynamic Complexity technology discussed in this article, we asked Digital Rapids to provide some further clarification, which we later added to this post.

While we’ve talked a lot about the 2,200 hours of video that will be available on NBCOlympics.com, it’s worth noting that providing all that video will be an Olympic event in and of itself.

I recently spoke with Brick Eksten, president of Digital Rapids, the company that will be compressing the video for the games, to find out what will go on behind the scenes. He said that Digital Rapids will have 56 servers to process 112 discrete video sources as well as conduct filtering, noise reduction and audio processing.

But first there are the physical limitations of the International Olympic Committee’s broadcasting center, which determine how much rack space, power and cooling each company involved in the process will have access to. And those limitations are strict, so companies have had to design their systems around those constraints.

To help fit within those constraints, Digital Rapids and Microsoft used a technology called Dynamic Complexity that scales the amount of encoding power drawn from the CPU based on what is being shown.

Update: From an email received from Digital Rapids:

Digital Rapids did not help develop Dynamic Complexity — we leveraged it from Microsoft’s VC-1 SDK, and worked with Microsoft to craft it into our software architecture (while re-architecting portions of our own software, and working with Microsoft — both the VC-1 and Silverlight teams — on other elements unrelated to the Dynamic Complexity). We came up with a scheme tailored to this project to leverage Dynamic Complexity across our multiple input channels in conjunction with our own software, but Dynamic Complexity existed before we started using it.

For example, with an actual sporting event, you want to make sure all of those action-packed bits get transmitted, so the video doesn’t get compressed as much. For something more sedate, like video of a talking head, you don’t need as many bits, so you can compress the video more. Dynamic Complexity helps even out the load that all of those feeds put on the encoders and the available bandwidth.
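To make that idea concrete, here’s a rough sketch in Python. It’s purely illustrative: the activity scores, the effort budget and the channel names are made up, and this isn’t the VC-1 SDK’s actual Dynamic Complexity API. It just shows the general shape of splitting a fixed pool of encoding horsepower across live channels based on how busy each one is.

```python
# Minimal sketch (not Digital Rapids' or Microsoft's code): shares a fixed
# CPU budget across several live channels, giving action-heavy feeds more
# encoding effort and bits while quieter feeds get compressed harder.
# The activity scores and the 0-100 "effort" scale are hypothetical.

CPU_BUDGET = 100.0  # total effort units available on one encoding server


def allocate_effort(activity_by_channel: dict[str, float]) -> dict[str, float]:
    """Split the CPU budget in proportion to each channel's current activity."""
    total = sum(activity_by_channel.values()) or 1.0
    return {channel: CPU_BUDGET * activity / total
            for channel, activity in activity_by_channel.items()}


# Hypothetical snapshot: a sprint final, a post-race interview, an empty venue.
snapshot = {"athletics": 0.9, "interview": 0.2, "venue-cam": 0.05}
for channel, effort in allocate_effort(snapshot).items():
    print(f"{channel}: {effort:.1f} effort units")
```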

There are also three separate streams being generated at any given time: high-res, low-res, and a picture-in-picture stream. This means that video has to be encoded three times. (I take back my complaint about a lack of HD.)
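For the curious, here’s a toy sketch of what fanning one feed out into three parallel encodes might look like. The profile names and bitrates are hypothetical, not NBC’s actual settings, and `encode_segment` stands in for a real encoder call.

```python
# Sketch only: each incoming feed gets encoded three times, once per output
# profile. Profiles and bitrates here are illustrative placeholders.
from concurrent.futures import ThreadPoolExecutor

PROFILES = {
    "high-res": {"bitrate_kbps": 1000},
    "low-res": {"bitrate_kbps": 400},
    "picture-in-picture": {"bitrate_kbps": 150},
}


def encode_segment(source: str, name: str, profile: dict) -> str:
    """Stand-in for a real encoder call on one segment of one source feed."""
    return f"{source} -> {name} @ {profile['bitrate_kbps']} kbps"


def encode_all(source: str) -> list[str]:
    # Run the three encodes in parallel; a real system would hand each job
    # to a dedicated encoding process or piece of hardware.
    with ThreadPoolExecutor(max_workers=len(PROFILES)) as pool:
        jobs = [pool.submit(encode_segment, source, name, profile)
                for name, profile in PROFILES.items()]
        return [job.result() for job in jobs]


print(encode_all("swimming-feed-07"))
```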

As if that weren’t enough, once all of the video is compressed, all that data still needs to be beamed across an ocean, through thousands of miles of cable and fiber, through ISPs and then finally — finally! — to viewers’ PCs.

If getting the video to you was an Olympic event, it’d be hurdles. Lots and lots of hurdles.
