
Summary:

It’s not often the software world goes through a revolutionary change. But the advent of the cloud will force software developers to reevaluate – and discard – many of their most basic assumptions.


The paradigm hasn’t changed since the advent of software: Applications run, and platforms are what they run on. But the underlying principles of application design and deployment do change every now and then – sometimes drastically, thanks to quantum-leap developments in infrastructure.

For instance, application design principles changed dramatically when the PC, x86 architecture, and client/server paradigm were born in the ’80s. It happened again with the advent of the web and open-source technology in the mid-’90s. Whenever such abrupt changes arise, application developers are forced to rethink how they build and deploy their software.

Today, we’re seeing a huge leap in infrastructure capability, this time pioneered by Amazon Web Services. It’s clear that to take full advantage of the new cloud infrastructure, applications that run successfully on AWS must be inherently different from applications that were built to run successfully on a corporate server – even a virtualized one. But there are a number of other specific ways in which today’s (and tomorrow’s) cloud applications will need to be designed differently than in the past. Here are the most crucial ones, and how the ways of the old world have changed in the new one:

Scaling 

In the old world, scaling was accomplished by scaling up – to accommodate more users or data, you simply bought a bigger server.

In the new world, scaling is typically done by scaling out. You don’t add a bigger machine, you add multiple machines of the same sort. In the cloud world, those machines are virtual machines, and their instantiations in the cloud are instances.
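
The scale-out idea fits in a few lines. The sketch below is a toy Python model, not any real cloud API: the `Instance` class and the round-robin dispatcher are purely illustrative. To add capacity you add more identical instances, and the dispatcher spreads requests across them:

```python
import itertools

class Instance:
    """Toy model of one identical worker instance (not a real cloud API)."""
    def __init__(self, name):
        self.name = name

    def handle(self, request):
        return f"{self.name} served {request}"

# Scaling out: instead of replacing one machine with a bigger one,
# launch several identical instances and spread the load across them.
instances = [Instance(f"vm-{i}") for i in range(3)]
dispatcher = itertools.cycle(instances)  # simple round-robin load distribution

responses = [next(dispatcher).handle(f"req-{n}") for n in range(6)]
```

To handle more load, you grow the `instances` list; nothing about any single machine needs to change.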

Resilience 

Before, software was seen as unreliable, and resilience was built into the hardware layer.

Today, the underlying infrastructure – the hardware – is seen as the weak link, and it is up to the application to compensate for this. There is no guarantee that a virtual machine instance will always function: it can disappear at any moment, and the application must be prepared for that.

By way of example, Netflix, arguably the most advanced user of the cloud today, has gone the farthest in adopting this new paradigm. It runs a process called Chaos Monkey that randomly kills virtual machine instances out from under the application workloads. Why on earth do they do this on purpose? To ensure uptime and resilience: by exposing their applications to the random loss of instances, they force application developers to build more resilient apps. Brilliant.
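
The dynamic can be sketched in miniature. Everything here is hypothetical – `Fleet`, the instance names, and the failover loop are illustrative stand-ins, not Netflix’s actual tooling – but it shows the contract: a chaos process kills instances at random, and the application survives by failing over:

```python
import random

class InstanceGone(Exception):
    """Raised when a request hits an instance that no longer exists."""

class Fleet:
    """Toy model of a fleet of identical virtual machine instances."""
    def __init__(self, names):
        self.alive = set(names)

    def call(self, name, request):
        if name not in self.alive:
            raise InstanceGone(name)
        return f"{name} served {request}"

def chaos_monkey(fleet, rng):
    """Terminate one live instance at random, Chaos Monkey style."""
    victim = rng.choice(sorted(fleet.alive))
    fleet.alive.discard(victim)
    return victim

def resilient_call(fleet, candidates, request):
    """The app assumes any instance can vanish: fail over until one answers."""
    for name in candidates:
        try:
            return fleet.call(name, request)
        except InstanceGone:
            continue  # that instance is gone; try the next one
    raise RuntimeError("no live instances left")

fleet = Fleet(["vm-a", "vm-b", "vm-c"])
chaos_monkey(fleet, random.Random(0))  # kill one instance at random
reply = resilient_call(fleet, ["vm-a", "vm-b", "vm-c"], "req-1")
```

An application written for the old world would simply crash on the first `InstanceGone`; one written for the cloud treats it as a routine event.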

Bursting

In the old world – think accounting and payroll applications – the application workload was reasonably stable and predictable. It was known how many users a system had, and how many records they were likely to process at any given moment.

In the new world, we see variable and unpredictable workloads. Today’s software systems have to reach farther out into the world, to consumers and devices that demand services at unpredictable moments and at unpredictable loads. Accommodating such unforeseen fluctuations in individual application workloads requires a new software architecture. We now have it in the cloud, but clearly it is still in its infancy.
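
The core of that architecture is an auto-scaling rule: size the fleet from the observed load rather than from a capacity plan. The function below is a deliberately minimal sketch – the name and the capacity numbers are made up for illustration, and real services such as AWS Auto Scaling use far richer signals and policies:

```python
def desired_instances(current_load, capacity_per_instance,
                      min_instances=1, max_instances=100):
    """Pick the smallest fleet that covers the current load,
    clamped to a configured floor and ceiling."""
    needed = -(-current_load // capacity_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

# A quiet night needs one instance; a sudden burst scales the fleet out,
# and when the burst passes, the same rule scales it back in.
quiet = desired_instances(30, 100)    # -> 1
burst = desired_instances(5400, 100)  # -> 54
```

The point is that capacity follows the workload minute by minute, instead of being fixed in advance for the worst case.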

Software variety

In the past we didn’t have much software variety. Each application was written in one language and used one database. Companies standardized on a single operating system, or at least very few. The software stack was boringly simple and uniform (at least in retrospect).

In the new world of cloud, the opposite is happening. Within a single application, many different languages, libraries, toolkits, and database products can be used. And because in a cloud you can create and spin up your own image, tailored to your and your application’s specific needs, applications within one company must be able to operate under a spectrum of configurations.

From VM to cloud 

Even between the relatively new technology of hypervisors and modern cloud thinking, there are differences. VMware, the pioneer and leader in virtualization, built its hypervisors to behave essentially the way physical machines did before.

But in the cloud world, the virtual machine is not a representation of a physical server; it’s a representation of units of compute. (Steve Bradshaw wrote about this topic in depth.)

User patience

In the old world, users were taught to be patient. The system may have needed a long time to respond to simple retrieval or update requests, and new features were added slowly to the application (if at all).

In the new cloud world, users have no patience. They hardly tolerate latency or wait times, and they look for improvements in the service every week, if not every day. Evidence of this can be found in self-service IT. Rather than file a ticket with IT and wait for a response several days later, users of IT can self-provision the resources they need.

Do these observations rhyme with what you are experiencing and taking action on in your organization? I look forward to comments and debate on this topic.

Marten Mickos is the CEO of Eucalyptus Systems. He previously served as CEO of MySQL AB, which was acquired by Sun Microsystems. He is a member of the board of directors of Nokia.

Have an idea for a post you’d like to contribute to GigaOm? Click here for our guidelines and contact info.

Photo courtesy of Mike Flippo/Shutterstock.com.


  1. Dan Benjamin Sunday, May 19, 2013

    Experiencing and feeling the exact same thing…

  2. Drilling down into the scalability topic: it is now vital to keep your application as stateless as possible. Handling state at cloud runtime is quite complex and, in most cases, an avoidable issue.

  3. Why does every tech blogger feel compelled to use the word “cloud” in the title? This isn’t so much an article about cloud computing as it is about distributed computing. Do you think Google or Facebook uses the cloud? This story focuses on server-side developers. Client-side developers have experienced their own paradigm shift, but that is from desktop to mobile, not from standalone/scale-up to distributed/scale-out. What really drove this server-side paradigm shift was the rise in popularity of the B2C freemium business model, which required web scale in order to succeed. A traditional 3-tier architecture talking to a single relational database just couldn’t keep up, even on a mainframe. Clusters of commodity hardware became much more affordable and allowed a wider proliferation of entrepreneurs to enter this market. Whether those clusters are owned, leased, or (in the case of cloud) rented may have contributed to this trend (especially for pivoting, lean startups) but was not the main driver. Ten years ago, neither Microsoft, IBM, nor Oracle was influencing developers towards distributed computing. It was the open source community who took the lead here. One major contributor to distributed computing for analytics is documented in http://www.dynamicalsoftware.com/analytics/oss where the Apache Foundation made a much more significant contribution than Amazon.

    1. Welcome to the conversation. Google and Facebook don’t use the cloud, they are parts of “the cloud.” Cloud computing is the new form of IT that powers cloud systems. Distributed computing is one element of cloud computing. There are many other components related to operational discipline, software architecture, service orientation, API centricity, availability vs. resiliency models, and so on.

      I recognize that many people are a bit new to this conversation, but there has been a LOT of nuanced discussion about “cloud” and “cloud computing” for a very long time now (circa late 2006, but really using the same terms since summer 2008).

      Three recommended places to start are:

      http://www.cloudscaling.com/blog/cloud-computing/the-cloud-is-not-outsourcing/
      http://www.cloudscaling.com/blog/cloud-computing/the-evolution-of-it-towards-cloud-computing-vmworld/
      http://www.cloudscaling.com/blog/cloud-computing/cloud-innovators-netflix-strategy-reflects-google-philosophy/

      Regards,

      –Randy

  4. martenmickos Sunday, May 19, 2013

    Thanks, Gengstrand, for your comment. I agree that it is about distributed computing and that the open source world in many aspects drove the development of it. We can be thankful to everything that the LAMP stack stands for.

    In my view, “cloud” means (or should mean) 2 closely related things: (i) the notion of using compute services that are provided to you by someone else, and (ii) the new software architecture that is needed to deliver such functionality. It was the latter meaning that I had in mind for this posting.

    Make sense?

    Marten

    1. I think Gengstrand is stating that the fact that compute services are provided by someone else is not the driving factor behind the new software architecture (the points you have mentioned in your article) – rather, it is the nature of distributed computing that is driving the aspects of software architecture you have mentioned. IMHO, I agree.

      1. The level of resilience would vary depending on whether the compute service provider is external or internal, and on the costs associated with it. That factor alone contributes to the rethinking of software architecture in a cloud environment versus distributed computing. You can have layers/tiers of distributed computing with physical/virtual hybrids. But in a cloud, if a VM instance disappears because of low usage and is reinstantiated during high load, the software must be resilient enough to maintain uptime and scale up/down appropriately.

  5. I think Armro5 (www.armr5.com) follows this article to redefine mobile security.

  6. Whether it is the cloud or parallel processing (think CUDA), it is good to rethink software development (and testing) every now and then. One thing that I am still missing in this discussion is the impact on software deployment. Updating one instance of an application on an IBM z/OS mainframe is quite different from updating an iOS application on 60,000,000 iPhones.

  7. Val Bercovici Monday, May 20, 2013

    Great high-level perspective on developers in the Cloud Marten! IMO, the headline is a little off. I would change it instead to:

    “For developers, the cloud is the best thing to ever happen for making software”

    The convenience, the flexibility and elasticity, and of course the low up-front costs of the Cloud are attracting developers in relative & absolute numbers I have never seen in over 30 years of IT experience.

    Rather than *having* to rethink their architectures, most developers are happily *willing* to “bend” their architectures (as Randy Bias above likes to say) around the “limitations” of IaaS (from legacy physical infrastructure platform perspective) because the stated benefits simply outweigh them.

    Steve Bradshaw’s blog eloquently covers the shift in mentality from legacy notions of physical server packages (fixed CPU, RAM, Storage, Network, IO, etc…) to elastic units of compute, network, storage, etc…

    It’s an exciting time in the industry to see IT reinventing itself once again into a brand-new 10+ year era!


  9. Alexandre de Pellegrin Tuesday, May 21, 2013

    From my point of view, the first rule to respect with distributed architecture is: do not distribute. And this is the first and main difficulty the cloud has to deal with, because distributed architecture over the cloud breaks transactions. So the first problem is keeping data integrity. The second problem is latency: even if, as you wrote, users don’t want to wait anymore, distributed architecture implies impediments between tiers, and these impediments are bigger over the cloud than inside an internal machine, simply because exchanges over the internet are slow. The third problem is security, because we have to manage security on each tier exposed on the web, and the protocols are not mature (I am thinking of OAuth2 and its end-user experience).
    My message isn’t against the cloud. It’s just here to moderate yours, and to remind everyone that the cloud is not paradise and the cloud is hard.

  10. Kiayada Bradford Friday, May 31, 2013

    Software development is a complex process that requires technical expertise, thus a developer should be highly skilled. This is to ensure that the software provider would be able to deliver only the best solutions to the organization.
