
UPDATE: After some questions arose over the origins of the Dynamic Complexity technology discussed in this article, we asked Digital Rapids to provide some further clarification, which we later added to this post.

While we’ve talked a lot about the 2,200 hours of video that will be available on NBCOlympics.com, it’s worth noting that providing all that video will be an Olympic event in and of itself.

I recently spoke with Brick Eksten, president of Digital Rapids, the company that will be compressing the video for the Games, to find out what will go on behind the scenes. He said Digital Rapids will have 56 servers processing 112 discrete video sources, while also handling filtering, noise reduction and audio processing.

But first there are the physical limitations of the International Olympic Committee's broadcast center, which determine how much rack space, power and cooling each company involved in the process will have access to. Those limits are strict, so companies have had to design their systems around those constraints.

To help fit within those constraints, Digital Rapids used a technology from Microsoft's VC-1 SDK called Dynamic Complexity that scales the amount of encoding power from the CPU based on what is being shown.

Update: From an email received from Digital Rapids:

Digital Rapids did not help develop Dynamic Complexity — we leveraged it from Microsoft’s VC-1 SDK, and worked with Microsoft to craft it into our software architecture (while re-architecting portions of our own software, and working with Microsoft — both the VC-1 and Silverlight teams — on other elements unrelated to the Dynamic Complexity). We came up with a scheme tailored to this project to leverage Dynamic Complexity across our multiple input channels in conjunction with our own software, but Dynamic Complexity existed before we started using it.

For example, with an actual sporting event, you want to make sure all of those action-packed bits get transmitted, so the video doesn't get compressed as much. For something more sedate, like video of a talking head, you don't need as many bits, so you can compress the video more. Dynamic Complexity helps even out the load carried by the available bandwidth and the encoding hardware.
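To make the idea concrete, here's a minimal, hypothetical sketch of content-adaptive encoding. The motion metric, thresholds and bitrate numbers are illustrative assumptions only; this is not the actual Dynamic Complexity logic inside Microsoft's VC-1 SDK or Digital Rapids' software.

```python
# Hypothetical sketch: spend more bits and CPU on action, less on talking heads.
# Thresholds and bitrates are made up for illustration.

from dataclasses import dataclass

@dataclass
class EncodeSettings:
    bitrate_kbps: int    # target bitrate for the segment
    encoder_effort: int  # 0 (fastest) .. 10 (slowest, highest quality)

def estimate_motion(prev_frame, frame) -> float:
    """Crude motion score: mean absolute pixel difference, normalized to 0..1."""
    diffs = [abs(a - b) for a, b in zip(prev_frame, frame)]
    return sum(diffs) / (len(diffs) * 255) if diffs else 0.0

def settings_for(motion: float) -> EncodeSettings:
    """Pick encoder settings based on how much is changing on screen."""
    if motion > 0.25:   # fast action, e.g. a sprint finish
        return EncodeSettings(bitrate_kbps=1500, encoder_effort=8)
    if motion > 0.05:   # moderate movement
        return EncodeSettings(bitrate_kbps=900, encoder_effort=5)
    return EncodeSettings(bitrate_kbps=400, encoder_effort=2)  # near-static scene

if __name__ == "__main__":
    # Two fake 4-pixel grayscale "frames": one nearly static, one changing a lot.
    static_pair = ([10, 10, 10, 10], [11, 10, 12, 10])
    action_pair = ([10, 200, 30, 250], [240, 20, 220, 5])
    for prev, cur in (static_pair, action_pair):
        m = estimate_motion(prev, cur)
        print(round(m, 3), settings_for(m))
```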

There are also three separate streams being generated at any given time: a high-res, a low-res and a picture-in-picture stream. That means each video source has to be encoded three times. (I take back my complaint about a lack of HD.)
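As a rough illustration of what producing those three renditions from a single source involves, here's a sketch that drives ffmpeg from Python. The file names, resolutions, bitrates and codec choices are placeholder assumptions, not NBC's or Digital Rapids' actual settings; per the update above, the real workflow used Digital Rapids' own software and VC-1 for Silverlight.

```python
# Hypothetical illustration of producing three renditions from one source feed.
# All paths, sizes and bitrates are placeholders; the real pipeline used
# Digital Rapids' software and VC-1, not ffmpeg/H.264.

import subprocess

SOURCE = "feed_camera_01.ts"  # placeholder input name

RENDITIONS = {
    "high": {"scale": "960:540", "bitrate": "1500k", "out": "camera_01_high.mp4"},
    "low":  {"scale": "640:360", "bitrate": "600k",  "out": "camera_01_low.mp4"},
    "pip":  {"scale": "320:180", "bitrate": "200k",  "out": "camera_01_pip.mp4"},
}

def encode(cfg: dict) -> None:
    """Run one encoding pass for a single rendition of the source feed."""
    cmd = [
        "ffmpeg", "-y",
        "-i", SOURCE,
        "-vf", f"scale={cfg['scale']}",
        "-c:v", "libx264", "-b:v", cfg["bitrate"],
        "-c:a", "aac", "-b:a", "96k",
        cfg["out"],
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    # Each source gets encoded three times: once per rendition.
    for cfg in RENDITIONS.values():
        encode(cfg)
```

Multiply that by 112 incoming feeds, roughly two per machine, and the 56-server footprint starts to make sense.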

As if that wasn’t enough, once all of the video is compressed, all that data still needs to be beamed across an ocean, through thousands of miles of cable and fiber, through ISPs and then finally — finally! — to viewers’ PCs.

If getting the video to you was an Olympic event, it’d be hurdles. Lots and lots of hurdles.


  1. You mean users’ Windows Vista PCs. Not all PCs and not Macs.

  2. NBC looks inside the video to put the Olympics online : Delve Networks Blog Monday, August 18, 2008

    [...] Albrecht from NewTeeVee recently posted an article about some of the ways the Olympics have pushed the envelope of online video technology. NBC has made over 2,200 hours of video available on [...]

  3. Hi Chris. I am writing to point out a factual error in this piece. You write “To help fit in those constraints, Digital Rapids and Microsoft came up with a new technology called Dynamic Complexity that scales the amount of encoding power from the CPU based on what is being shown.” This is 50% correct. Microsoft did work with a leading video encoding provider on this technology, but it wasn’t DR, rather it was Inlet Technologies (where I work). Inlet co-developed Dynamic Complexity with Microsoft in late 2006 as part of a joint program led by Inlet’s CTO Scott Labrozzi and the now-defunct Digital Media Division of Microsoft. It’s great to see that outlets like NewTeeVee are talking about the hard work that Inlet has been doing over the past 5 years to improve video encoding. As an FYI, Inlet’s Spinnaker live encoding product line has shipped with this capability since NAB 2007.

  4. Hi Andy,

    Thanks for your comment. My piece was based on information from an interview with Digital Rapids, so I’ll follow up with you to find out more.

  5. No worries – thanks for following up!

Comments have been disabled for this post