How Not to End Up as an Anachronism

Written by Greg Olsen, Founder and CTO of Coghead

Imagine that your friend tells you that he has an idea for a new Palo
Alto, Calif.-area restaurant that he wants you to invest in. His pitch goes like
this: The restaurant is all about self-sufficiency. In addition to
actually serving good food, this restaurant will feature the following:

  • All food served will be organically raised and processed on-site
  • Power will be provided by an on-premise power plant
  • Water will be provided by an on-site well and rainwater-capture system
  • A self-contained waste management system will eliminate the need for a
    sewer hookup

While there are probably some people in Palo Alto who might actually
think this is a good idea, you, being of sound mind, respond, “Is this a
joke? Why build basic infrastructure like foodstuff production, water,
sewer, etc. when very efficient, cost-effective, ‘pay-as-you-need-it’
options already exist?”

Imagine that your other friend tells you that she has an idea for a new
software application company she wants you to invest in, and that
this company (in addition to actually creating a useful service-based
application) will:

  • Build and manage redundant data centers with a carefully constructed
    custom hardware and software stack
  • Set up an advanced network peering infrastructure for redundancy and
    improved latency
  • Implement a flexible payment system for customers and channel partners

This could also be misconstrued as a joke. Why would a small
application provider spend so much capital, time and energy building
infrastructure when readily available ‘pay-as-you-need-it’ services
exist, such as compute, storage and network infrastructure services (e.g. Amazon’s
EC2 & S3 services), and payment services from Google, Amazon, etc.?

It is possible that specifics of your friend’s application make use of
available service options infeasible, but it is just as likely that your
friend has simply not yet adapted to a service-based infrastructure
reality. There are always seemingly good reasons to continue doing things the way they were done in the past, and transition
always presents challenges. As ironic as it may be, we continue to see
software applications deployed as a service but which fail to use any
service-based infrastructure themselves. They are two basic reasons for
this situation: Change of existing operational services is hard. So is changing people behavior.

Once an application service is deployed, infrastructure changes are
hard to make. Commitments of capital, such as advance purchases of
compute and storage capacity, often cannot be undone without very high
switching costs. Many architectural choices have lasting ramifications.

For example, if a provider built their application based on the
assumption of very large SMP servers, a proprietary commercial database
clustering approach and vendor-specific HA infrastructure, they would
find it difficult if not impossible to move to a service-based
infrastructure that’s based on generic hardware/software platforms and
horizontal scaling.
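To make the contrast concrete, horizontal scaling means capacity grows by adding interchangeable generic nodes rather than by buying a larger SMP machine. A minimal sketch of the idea, in Python, is key-based sharding; the `ShardRouter` class and node names here are purely illustrative, not any particular vendor’s API:

```python
import hashlib

class ShardRouter:
    """Route records to generic, interchangeable nodes by hashing a key.

    Illustrates horizontal scaling: capacity is added by adding nodes,
    not by replacing the server with a bigger one.
    """

    def __init__(self, nodes):
        self.nodes = list(nodes)

    def node_for(self, key):
        # A stable hash ensures the same key always maps to the same node.
        digest = hashlib.sha256(key.encode("utf-8")).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

router = ShardRouter(["node-a", "node-b", "node-c"])
assignment = router.node_for("customer-1138")
```

Note that this naive modulo scheme reshuffles most keys whenever a node is added or removed; production systems typically use consistent hashing to limit that churn. The point stands either way: an application built around one large proprietary machine cannot be re-routed this way without a rewrite.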

Even for a new application service, it’s often hard to find people who
will embrace disruptive infrastructure options. It is almost inherent
in human nature that once we develop a difficult skill we are reluctant
to give up using it — even after simpler and more efficient alternatives
become obvious. Often, people perceive that their livelihood is tied to
the skill and then fear their own obsolescence in the obsolescence of
the skill. The history of software applications provides a rich set of
examples of this phenomenon. At one time, it was common for software
application providers to create their own hardware, operating systems,
networking infrastructure, languages, compilers, user interface
technology, etc. Eventually, successful application providers took
advantage of standard hardware platforms, operating systems and
languages — to the detriment of the many providers that clung to the prior
model. Likewise, vendors that leveraged the Internet and application
servers gained, while many others continued to cling to proprietary
client/server architectures and were the worse for it.

The recent “software-as-service” phenomenon is a particularly
interesting example of disruptive change. SaaS was first seen as a
disruptive force inside the IT groups of large application users.
Most companies are starting to understand that they would be better off
with less information technology on their premises and more of it
procured as a service over the Internet. Still, many within IT
organizations are reluctant to embrace this form of change.
(Personally, every time I see an IBM Blade Server commercial during a
major sporting event, I wonder what percentage of the viewers know
what it is, what percentage of those could actually influence the
purchase of one, and what percentage of those should actually be
buying data center servers in an efficient universe.)

We are now at a point where implementors of SaaS capabilities are being
disrupted by newer SaaS capabilities. Services that are built largely
from other services are a reality, and offer many clear advantages. The
types of services that could be used in the creation of new services
span the spectrum, from base infrastructure services to complementary
high-level application services that can be composed or mashed up.
Example services include:

  • Compute and storage services
  • Database and message-based queuing services
  • Identity management services
  • Log analysis and analytics services
  • Monitoring and health management services
  • Payment processing services
  • E-commerce services like storefronts or catalogs
  • Mapping services
  • Advertising services

These are in addition to the more well-known business application
services like CRM and accounting.
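A service built from other services can be sketched very simply: the application contributes only its own logic, while payments and storage are delegated to providers. The class and method names below are hypothetical stand-ins for hosted APIs, kept in-memory here so the sketch is self-contained:

```python
class PaymentService:
    """Stand-in for a hosted payment API (the real one is an HTTPS call)."""
    def charge(self, customer, amount_cents):
        return {"customer": customer, "amount": amount_cents, "status": "approved"}

class StorageService:
    """Stand-in for a hosted storage/database service."""
    def __init__(self):
        self._records = {}
    def put(self, key, value):
        self._records[key] = value
    def get(self, key):
        return self._records[key]

class OrderService:
    """The application service: only order logic lives here; the rest
    is composed from other services."""
    def __init__(self, payments, storage):
        self.payments = payments
        self.storage = storage
    def place_order(self, order_id, customer, amount_cents):
        receipt = self.payments.charge(customer, amount_cents)
        self.storage.put(order_id, receipt)
        return receipt["status"]

storage = StorageService()
orders = OrderService(PaymentService(), storage)
status = orders.place_order("ord-1", "alice", 1999)
```

Swapping the stand-ins for real hosted services changes the constructor arguments, not the order logic, which is precisely why a small provider can assemble a complete application without building any of the underlying infrastructure.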

The move to SaaS applications built on SaaS is a much more profound
shift than the move from on-premise applications to SaaS applications.
The software industry is beginning to display characteristics that
mimic the supply chains and service layering that are commonplace in
other industries like transportation, financial services, insurance,
food processing, etc. A simple set of categories like applications,
middleware and infrastructure no longer represents the reality of
software products or vendors. Instead of a small number of very large,
vertically integrated vendors, we are seeing an explosion of smaller,
more focused software services and vendors. The reasons for this
transition are simple: it takes less capital and other resources to
create, integrate, assemble and distribute useful software.

By leveraging service options like Amazon’s EC2 and S3, a
small company can deploy a complex, highly available and scalable
multi-user software application — without huge upfront investments in
hardware or software infrastructure. Likewise, a very small company can
build a simple, narrowly focused service and can cost-effectively sell
it to a mass audience. Neither of these companies would have been possible only a short time ago.

A new software service economy is rapidly unfolding and is causing
disruption in the software industry. Ironically, the first victims of
this new economy may be pioneers of the software-as-a-service movement
itself. Today, many established SaaS application providers
are applying much more of their precious focus and capital to
infrastructure issues than newer competitors that are aggressively
utilizing service-based infrastructure. The self-contained restaurant
and the build-it-all-ourselves SaaS application vendor both have
seemingly good rationales for their chosen paths, but both will
ultimately end up as anachronisms that are left behind by their competition.
