Report from AWS re:Invent: what about elastic expertise?

Nobody could fail to notice the dynamic atmosphere at Amazon Web Services’ (AWS) annual shindig, held in Las Vegas this week. As well as the infrastructure software vendors to be found at any vendor conference, the show floor is a veritable smorgasbord of smaller organisations offering application deployment, security and management solutions. And the place is heaving.

In a room upstairs, the analyst event has now come to a close — that conference within the conference where industry observers get the inside track on AWS strategy, plans and success stories. No longer under NDA are the 500-odd new features and services announced by the company, including support for new database types, an elastic search capability and many, many, many more.

All, say executives, have been driven by customer demand. “Each new service was created in response to customer feedback,” says CTO Werner Vogels in the Thursday morning keynote. Clearly the organisation has come some distance from its less client-friendly e-commerce self of a decade ago. And equally clearly, the strategy is working — AWS continues to grow at an astonishing rate compared to its peers.

Distil the announcements and a picture of what customers are asking for starts to emerge. They want bigger and better releases of what is already there; they want cloud-based versions of whatever they are using in-house; they want to migrate sometimes huge quantities of data; and they want cloud-based stuff to play nicely with in-house systems and databases.

On this latter point, the h-word — hybrid — is inevitably mentioned. Disclosure: since the earliest days of cloud computing I have never understood the concept of a journey ‘to’ the cloud, as if organisations wanted to end up with purely cloud-based setups. Fact is, I’ve never seen an organisation want purely anything; rather, they want the right tool for the job, which inevitably means some integration work.

This means that the near future is inevitably going to be hybrid, a fact as true now as it was five years ago. An additional factor is that technology resources are still finite, so compromises have to be made. A salutary illustration is AWS’ announcement of Snowball, essentially a secure storage appliance that lets a company ship terabytes of data to one of Amazon’s facilities for ingestion.

While the device would not look out of place in Tony Stark’s research facility, the product nonetheless applies a very old solution (never underestimate the bandwidth of a station wagon full of tapes, as the adage goes) to a persistent problem: that even the fastest network connections can be insufficient when data volumes are large enough. This has always been true, however much vested interests might have downplayed the point.
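To put rough numbers on the point, here is a back-of-the-envelope sketch in Python. The 100 TB and 100 Mbps figures are illustrative assumptions on my part, not AWS specifications or Snowball’s actual capacity:

```python
# Back-of-the-envelope: how long does it take to push a dataset over
# the wire? Figures here are illustrative assumptions, not AWS specs.

def transfer_days(terabytes: float, megabits_per_second: float) -> float:
    """Days needed to move the given volume over a link of the given speed."""
    bits = terabytes * 1e12 * 8                  # decimal TB -> bits
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86400

# 100 TB over a dedicated, fully saturated 100 Mbps line:
print(f"{transfer_days(100, 100):.0f} days")     # ~93 days, before any overhead
# A courier-shipped appliance covers the same ground in days, not months.
```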

These two compromises — that cloud will not always be appropriate for every use case right now, and that resources (particularly bandwidth) may not always be sufficient — underlie AWS’ smiling, albeit grudging, acceptance of reality. After all, as Amazon’s own people tell us, they like to listen to their customers and that is what their customers are saying. Of course they are.

Thinking more broadly about the breadth of announcements, is Amazon right to be quite so responsive? Beyond a certain point, relentless customer focus can become more of a weakness than a strength, particularly where disruptive innovation is concerned — as a couple of other analysts commented to me, “The customer is not always going to be right.”

A deeper consequence — not necessarily an issue, you understand — emerges from this level of responsiveness. The company has done a great job of delivering a large number of new tools to market in a relatively short space of time. Offering everything possible results in other compromises, however. The first is fragmentation — you already have to be an expert in what’s available before you can decide what to use.

Such expertise is hard to come by, particularly since (as a representative from AWS solutions firm Disys pointed out) the software-defined nature of it all requires ‘full-stack experts’ — people who understand all layers, from (virtual) networking and (virtual) servers to operating systems, containers, databases and so on. It seems ironic that the one area where AWS doesn’t apply the principle of elasticity is in scaling the required skills.

The second is deployability. Many of the tools in the AWS toolbox are presented with minimum fluff; ‘nowt wrong with that,’ I hear hardened engineers say. But for the less experienced, the ability to create a complete stack at the push of a button, rather than having to build it from scratch, would be altogether preferable. Indeed, even the hardiest engineers would rather have automation built in than waste time (say) writing JSON configuration files by hand. Really.
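To make the complaint concrete, here is a minimal sketch of the alternative: generating the configuration document from code rather than maintaining it by hand. The schema and resource names below are invented purely for illustration; they are not a real AWS template format:

```python
# A minimal sketch of generating stack configuration rather than hand-writing
# JSON. The schema and names here are invented for illustration; they are
# not a real AWS template format.
import json

def web_stack(name: str, web_servers: int) -> dict:
    """Describe a simple two-tier stack as a plain data structure."""
    return {
        "StackName": name,
        "Resources": [
            {"Type": "WebServer", "Count": web_servers},
            {"Type": "Database", "Engine": "postgres"},
        ],
    }

# One function call per environment, instead of a hand-maintained JSON file:
print(json.dumps(web_stack("demo", 3), indent=2))
```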

While the AWS management team seems content with the status quo, I’d wager that this is a consequence of where the company is right now. Sooner or later, if left unchecked, complexity and fragmentation will win; in response, as a tech vendor matures, it usually starts to put higher-order facilities in place. Signs of this are already emerging from AWS, such as the security checking capabilities that constituted one of the many announcements.

I envisage solution patterns, architecture templates and best practices being first documented by AWS, then facilitated via user interface tweaks, then potentially automated, simplifying how cloud-based solutions are built and deployed. As Vogels himself noted in the keynote, citing Gall’s law (complex systems that work invariably evolve from simple systems that worked), complexity is not something any customer wants to have to deal with.

By adopting such capabilities AWS will be emulating an oft-repeated pattern, reflecting how new technologies mature and, ultimately, become subsumed into the platform. At that point, no doubt an AWS executive will stand up and say, “We like to listen to our customers, and this is what they are telling us.” And he or she would be right.