Distributed computing is nothing new, but like the Big Bang, what was once contained in a singular node of computing has exploded into an ever-expanding number of real and virtual machines traveling farther and farther from any central origin. It's not a perfect metaphor — there was never just one mainframe or one data center — but the thinking is similar: the number of nodes is increasing, and their placement on the network is moving farther and farther out.
Which is why this year at Structure we're pushing deeper into use cases and into an understanding of how one builds computing that no single organization controls. Can computing embrace entropy while still delivering reliable results? The event, held in San Francisco on June 18 and 19, attempts to discover how big names in webscale computing are thinking about the edge and designing applications that can span both the cloud and individual sensors. But while the Googles and Facebooks may be on the leading edge, how far can companies like HP or VMware drag enterprise clients into the future, and what's holding them back?
1. An application that lives in every time zone
Over the years Google has driven distributed computing forward with technologies such as MapReduce and Spanner. It is clearly thinking about how to build applications that aren't isolated in one data center or even one time zone. This type of distributed thinking is behind its latest networking investments and is why Urs Hölzle, SVP of Technical Infrastructure and Google Fellow, is speaking at Structure. But we're also bringing in others who understand these problems, including Facebook's Jay Parikh and Microsoft's Scott Guthrie.
2. Building trust on untrusted hardware
Securing more and more devices isn't just hard; it's becoming impossible as computing spans different clouds, data centers and networks. With security flaws like Heartbleed and weak physical endpoints such as the point-of-sale terminals that led to the Target data breach, we're going to need a new model for security. Matthew Prince, the CEO of CloudFlare, has some ideas on how to implement security in this brave new world.
3. Going to light speed
It's not enough to say that computing will need to occur over a greater number of devices. We also have to improve the speed at which information travels over the myriad networks it must traverse to enable real-time data processing. And we'll need new forms of memory capable of holding more data close to the compute — perhaps even containing processing of their own. Speakers such as Andreas Bechtolsheim of Arista Networks, Diane Bryant of Intel and Vinod Khosla can help us understand the hardware problems in these three areas and the potential solutions in the works.
4. Leave no computer or company behind
This explosion of data and endpoints is nothing new to the enterprise, which has dealt with it since mainframes gave way to the personal computer and, now, to every employee bringing his or her own device. But each evolution has added complexity, and the pressure is on to rethink the overall architecture with a focus on agility. We're bringing in Jamie Miller, the SVP and CIO of GE; Jeffery Padgett, Senior Director of Infrastructure Architecture at Gap; Don Whittington, VP and CIO of Florida Crystals; and Stephan Felisan, VP of Engineering & Operations at Edmunds.com, to explore how old-line companies will make this next evolution.
5. Abstract everything you hold dear
Part of the promise of this new style of computing architecture is that more people without deep technical skills can use technology to improve their business. But to make that possible, you have to make hard tech easy, and abstraction is how most companies are choosing to do it. We'll have the granddaddy of abstraction, Amazon CTO Werner Vogels, onstage to discuss how far the world's most popular public cloud can take that concept. Mike Curtis, the VP of Engineering at Airbnb, will also join in, discussing the practical limitations of such a strategy.