More from Facebook on its new networking architecture


Facebook has blazed a lot of trails by open sourcing some of its handiwork and driving the creation of the Open Compute Foundation to push adoption of standard server designs for scale-out data centers. Now it’s on to networking, where it’s talking up the new Data Center Fabric already rolled out in its Altoona, Iowa data center, with broader deployment under way.

On this week’s Structure Show, listen as Najam Ahmad, [company]Facebook[/company] director of network engineering, patiently (and I mean patiently) explains what this technology means to application builders and why the move to a more modular core-and-pod design, versus clustered systems, is key to obliterating old networking bottlenecks. “For once, networking won’t be the gating factor,” he said.

Most simply put, in traditional networks, when the biggest and best top-of-the-line switches maxed out, you had to reconfigure the whole cluster. With the fabric layout, bandwidth can be added much more easily, and the network is limited, in theory, only by the size of the facility and the available power. But he explains it much better than I ever could, so give it a listen.
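To make the difference concrete, here’s a toy back-of-the-envelope sketch (my own illustration with made-up numbers, not Facebook’s published figures): a cluster’s capacity tops out at the port count of its biggest chassis switch, while a fabric grows by adding identical spine planes.

```python
# Toy comparison of cluster vs. fabric scaling. All numbers are
# hypothetical, chosen only to illustrate the scaling behavior.

def cluster_capacity_gbps(chassis_ports: int, port_gbps: int) -> int:
    """In a cluster design, the biggest chassis switch is the ceiling."""
    return chassis_ports * port_gbps

def fabric_capacity_gbps(spine_planes: int, spines_per_plane: int,
                         ports_per_spine: int, port_gbps: int) -> int:
    """In a fabric, capacity scales with the number of spine planes."""
    return spine_planes * spines_per_plane * ports_per_spine * port_gbps

# Cluster: once the flagship switch maxes out, you rebuild the cluster.
print(cluster_capacity_gbps(chassis_ports=1152, port_gbps=10))   # 11520

# Fabric: keep adding planes until floor space or power runs out.
for planes in (1, 2, 4):
    print(planes, fabric_capacity_gbps(planes, spines_per_plane=48,
                                       ports_per_spine=48, port_gbps=10))
```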

Facebook’s Look Back video extravaganza could have benefited from this networking fabric. For more on how Facebook’s infrastructure people got the plumbing together for 700 million videos in three weeks, check out the recent talk by Facebook capacity and performance engineer Goranka Bjedov linked in this story.

But first, we discuss the cloud opportunities and challenges [company]IBM[/company] faces, and how the [company]Oracle[/company] cloud story just got a lot more interesting with the hiring of Mark Cavage, the brains behind Joyent’s Manta distributed object store.

Najam Ahmad, director of network engineering at Facebook.

SHOW NOTES

Hosts: Barbara Darrow and Derrick Harris

Download This Episode

Subscribe in iTunes

The Structure Show RSS Feed

PREVIOUS EPISODES:

Do you find OSS hard to deploy? Say hey to ZerotoDocker

All about AWS Re:Invent and oh some Hortonworks and Microsoft news too

Ok, we admit it. Google may be serious about cloud after all

EMC cloud really isn’t all about EMC hardware, says EMC product chief

Microsoft’s big freaking servers make cloud wars even more interesting

1 Comment

Rich Hintz

Interesting show. On enterprise selling of cloud by IBM and Oracle: IT customers aren’t in love with enterprise sales from, say, IBM and Oracle, but customers in big, and especially public, entities are governed by procurement policy that you sidestep at your peril. See IBM’s suit against the USG on the CIA award to AWS. IBM and Oracle make it easier to buy products, including cloud in their portfolio, than negotiating a new contract from scratch with, say, Amazon.

The other factor is shadow IT buying IaaS/PaaS on low value purchase orders that can avoid death-march contract negotiations. Departmental credit cards usually are used only until Audit says no, which typically takes about one annual audit cycle.

On the Facebook network, there are several issues here. One is the network topology FB is using. I may be wrong, but it seems like classic spine-leaf, which is in wide use in modern (that is, not all that many) data centers. It’s not foreign to big vendors. Cisco, among others, has blurbs on data center network design using spine-leaf.
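For anyone who hasn’t run into the term, here’s a minimal sketch of what spine-leaf means (my own toy model, not FB’s published design): every leaf switch uplinks to every spine, so any two leaves sit exactly two hops apart, and adding a spine adds an equal-cost path, and bandwidth, between every leaf pair at once.

```python
# Minimal spine-leaf model: a full bipartite graph of leaves and spines.
SPINES = [f"spine{i}" for i in range(4)]
LEAVES = [f"leaf{i}" for i in range(8)]
LINKS = {(leaf, spine) for leaf in LEAVES for spine in SPINES}

def two_hop_paths(src: str, dst: str) -> list[list[str]]:
    """All leaf-spine-leaf paths between two leaves (the ECMP choices)."""
    return [[src, s, dst] for s in SPINES
            if (src, s) in LINKS and (dst, s) in LINKS]

# Every leaf pair has one equal-cost path per spine; grow the spine
# layer and every pair gets more bandwidth without re-cabling leaves.
print(len(two_hop_paths("leaf0", "leaf5")))  # 4, one per spine
```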

I wish there had been more about the white box hardware (ODM manufacturers from multiple sources?); the software being used, especially between data centers with multiple, possibly unbalanced paths (BGP? Something else?); automation; and how networking folks collaborate with developers. Possibly material for another show.
