Analyst Report: Cloud and data centers join forces for a new IT platform for internet applications and businesses

1 Summary

Business and IT are no longer separate silos. Your application is your business, and software development is the glue that holds it together. But how can you develop your business and the applications that support it at the same time? How far can you push one before the other breaks? The monolithic application is dead, and it’s time we started acting like it. Modern applications provide and consume different services with different needs, and each of those services passes through a number of hands on the journey between conception and production. As a result, there is no single “right” platform for application development and production. For some applications, the quick provisioning of the public cloud may provide an ideal prototyping or dev/test environment, though actual production may happen on in-house servers. For others, competitive or privacy concerns may dictate exactly the opposite strategy.

To extract maximum value from their infrastructure, developers must move beyond their current binary thinking and evaluate platform choice based on the various phases of application development and the many roles that touch an application during those phases. An increasingly large swath of businesses is realizing that the cloud-plus-data-centers model can provide the best of both worlds, and integrating the public virtual cloud with the physical data center is the best way to cost-effectively scale, secure, and serve modern production workloads.

Key findings include:

  • Public cloud infrastructure is excellent for many use cases, but it exposes only the most common 70 percent or so of the functionality that a business may need.
  • The cost, control, and security benefits of a data center can be significant differentiators for business applications.
  • Cloud-plus-data-center provides the best option for most businesses, maximizing cost, functionality, and agility benefits.

2 The right tool for the job

When it comes to IT infrastructure, business stakeholders are needy, and rarely do their needs overlap. Developers and system administrators have long been hounding operations for tools and services that provide better agility and productivity. They look to the public cloud offerings and see a huge draw in pay-as-you-go services, which make high availability and disaster recovery as simple as checking a box on a web form.

On the other hand, finance looks at the bottom line and quickly realizes that the sunk costs of existing data centers and the pay-as-you-go model of data center colocation are generally lower-cost options for many deployments. In addition, security and compliance teams are looking for visibility and control, which may also lead them to lean toward a local data center, but at what cost?

There is no magic bullet that can meet all parties’ needs in aggregate. IT can address each set of concerns individually, but like most technology decisions, they will have to mix a cocktail from several individual approaches and products, each suited uniquely to the biggest concerns of an individual stakeholder and each involving some compromise. Options include Infrastructure as a Service (IaaS), or public cloud, and Platform as a Service (PaaS) solutions that allow IT to deploy to both public and private clouds. But one of the most frequently overlooked tools is also the most powerful, flexible, and possibly lowest-cost option: the company’s own infrastructure in a high-end data center.

With public cloud offerings like Amazon Web Services (AWS) approaching their first double-digit birthdays, we now have a whole generation of systems architects and operations folks who may have never racked and stacked servers or configured an F5 load balancer. For these individuals, the concept of owning your own hardware may be foreign at best and seem downright archaic at worst. These technologists may think that the mass-transit bus of the public cloud mainly gets them where they need to go, but many have never been behind the wheel of their own data center because they’ve been taught that it is too expensive, too wasteful, or completely obsolete.

Public cloud is definitely one of the many tools a business needs to leverage, but it needs to be understood for what it is – a genericized approach to IT capacity that is built for the most common 70 percent of use cases. The configuration options exposed to you are limited to either the easiest to implement or the most commonly used. With public cloud, you’re paying for convenience, but what you’re getting is often commodity technology under the hood. That commodity approach could be commoditizing your business, and that bus you’re riding could be both slowing you down and costing you more.

3 Public cloud’s silver lining

Some apps and architectures are absolute no-brainers for the public cloud. If you’re hosting a simple, generic three-tier web app with manic usage patterns, there’s nothing like the auto scaling of the public cloud. If you need to store and serve up static content, a highly durable pay-as-you-go object store like S3 has no equal. If you’re a startup with limited developer-heavy IT resources, IaaS allows you to focus most of your precious time on building your product rather than managing your IT infrastructure.

Public cloud is also a great complement to your on-premises infrastructure, offering many novel approaches to burst capacity outside of your own data center. With this complementary approach, you can spin up disposable dev, test, and proof-of-concept environments on demand in the public cloud and then deploy to your own low-cost, high-control infrastructure. You can also move some of your static website assets out to a public cloud object store while keeping the dynamic content hosted in-house – both taking load off of your backend web servers and speeding up content delivery for your users.
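To make that static-asset offload concrete, here is a minimal sketch (assuming the boto3 library, configured AWS credentials, and a bucket name that is purely a placeholder) that copies a local static directory into an object store while the dynamic tiers stay in-house:

    # Minimal sketch: offload static website assets to a public cloud object store
    # (here, S3 via boto3) while dynamic content stays on in-house web servers.
    # Assumes AWS credentials are configured and the bucket already exists.
    import mimetypes
    from pathlib import Path

    import boto3

    def sync_static_assets(local_dir: str, bucket: str, prefix: str = "static/") -> None:
        """Upload every file under local_dir to s3://bucket/prefix, preserving paths."""
        s3 = boto3.client("s3")
        root = Path(local_dir)
        for path in root.rglob("*"):
            if not path.is_file():
                continue
            key = prefix + str(path.relative_to(root))
            content_type, _ = mimetypes.guess_type(path.name)
            s3.upload_file(
                str(path),
                bucket,
                key,
                ExtraArgs={"ContentType": content_type or "application/octet-stream"},
            )
            print(f"uploaded {path} -> s3://{bucket}/{key}")

    if __name__ == "__main__":
        # Placeholder bucket name for illustration only.
        sync_static_assets("./public/static", "example-static-assets-bucket")

From there, the in-house pages simply reference the object-store URLs for those assets.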

High availability and disaster recovery are two other areas where the public cloud can shine. If you want to start storing backups on the other side of the country or the planet, you could literally be doing that as soon as five minutes from now. With just a few more weeks of work, you could implement architectures like low-capacity standbys or pilot lights – public cloud-side redundancy that stands your entire application back up in the public cloud should it go down in your own hosted infrastructure (a minimal sketch of the startup step follows the figure caption below).


Figure: Pilot light – dormant public cloud infrastructure is started and scaled to production capacity when a disaster recovery event occurs in your data center.
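To make the pilot-light startup step concrete, here is a minimal sketch (boto3 assumed; the "dr-role" tag convention is hypothetical) that finds the dormant, pre-built instances and brings them up when a disaster recovery event is declared; scaling them to full production capacity would follow:

    # Minimal pilot-light sketch (illustrative only): when a disaster recovery event
    # is declared in the primary data center, start the dormant cloud instances that
    # were pre-built and tagged "dr-role=pilot-light", then wait for them to come up.
    # Assumes boto3, configured AWS credentials, and instances created ahead of time.
    import boto3

    def activate_pilot_light(region: str = "us-east-1") -> list:
        ec2 = boto3.client("ec2", region_name=region)
        # Find the dormant (stopped) instances reserved for disaster recovery.
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:dr-role", "Values": ["pilot-light"]},
                {"Name": "instance-state-name", "Values": ["stopped"]},
            ]
        )["Reservations"]
        instance_ids = [
            inst["InstanceId"] for r in reservations for inst in r["Instances"]
        ]
        if instance_ids:
            ec2.start_instances(InstanceIds=instance_ids)
            ec2.get_waiter("instance_running").wait(InstanceIds=instance_ids)
        return instance_ids

    if __name__ == "__main__":
        started = activate_pilot_light()
        print(f"started {len(started)} pilot-light instances: {started}")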

 

4 When the cloud goes dark

Most businesses considering cloud adoption take an all-too-simplistic approach to the risk factors, citing concerns like security, vendor lock-in, and interoperability as their top inhibitors. Although those are all valid concerns, what a growing number are realizing is that, in addition to increased scale and increased convenience, the public cloud often also results in an increased bill.

In its 2013 Future of Cloud Computing survey, North Bridge Growth Equity Venture Partners reported that 46 percent of respondents describe cloud-driven IT as “more complex,” since the commodity cloud offerings often need to be shored up with highly customized in-house solutions in order to provide the required performance, functionality, and possibly security. Reflecting perhaps some firsthand experience, a surprising 28 percent of respondents reported that cost was a significant barrier – a full 50-percent increase over the previous year’s figure. In response to the survey, Steven Martin, GM of Windows Azure Marketing and Operations, echoed what many others also believe: “Over the next five years, hybrid cloud will become the norm.”

Those with recent experience deploying to a public cloud will certainly find no surprises in those numbers or statements. In fact, it’s fairly common for companies adopting public cloud to see their cloud fees come in higher than – and sometimes double or triple – the cost of their data center-based solution. The most common reason for this is that cloud instances that are easy to deploy are just as easy to forget and leave running, becoming “zombie virtual machines.” Just as it is easy to use, the public cloud is also easy to abuse. In an extreme example, a client hired a consultancy to help them migrate to AWS. The consultants spun up hundreds of servers, failed to document which were production, development, and test, and left them all on. The cleanup effort spanned weeks as servers had to be powered down one by one, with everyone holding their breath to see whether production systems would go down.
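A simple, periodic audit goes a long way toward avoiding that scenario. The sketch below (boto3 assumed; the "environment" tag convention is hypothetical) flags running instances that nobody bothered to label:

    # Minimal "zombie hunt" sketch (assumes boto3 and AWS credentials): flag running
    # instances that carry no "environment" tag, since undocumented instances are the
    # ones most likely to be forgotten and left burning money. The tag name is a
    # hypothetical convention; adapt it to whatever your team actually uses.
    import boto3

    def find_untagged_running_instances(region: str = "us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        suspects = []
        paginator = ec2.get_paginator("describe_instances")
        for page in paginator.paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        ):
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                    if "environment" not in tags:
                        suspects.append(
                            (instance["InstanceId"], instance["LaunchTime"], tags.get("Name", ""))
                        )
        return suspects

    if __name__ == "__main__":
        for instance_id, launch_time, name in find_untagged_running_instances():
            print(f"{instance_id}  launched {launch_time:%Y-%m-%d}  name={name!r}")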

Even when a business is good about cleanup, the raw economics of the cloud can still result in unpleasant surprises. As you might expect, cloud companies stay in business by making a profit on the services they provide. These profit margins can cause some managed services to cost more than doing it yourself would. For example, customers coming back to a data center after an unsuccessful migration to the cloud often cite bandwidth costs as one of the leading factors.
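Bandwidth illustrates the point well: metered per-gigabyte egress grows linearly with traffic, while colocation bandwidth is usually bought as a flat committed rate. The comparison below uses purely illustrative numbers (assumptions for the arithmetic, not published rates from any provider):

    # Back-of-the-envelope comparison of data-transfer costs. All prices are
    # illustrative assumptions, not quotes from any provider: metered per-GB cloud
    # egress versus a flat-rate committed port at a colocation facility.
    def monthly_cloud_egress_cost(tb_out: float, price_per_gb: float = 0.09) -> float:
        return tb_out * 1024 * price_per_gb

    def monthly_colo_bandwidth_cost(committed_port_fee: float = 1500.0) -> float:
        # Colocation bandwidth is typically priced as a flat committed rate,
        # so the marginal cost of additional transfer within the commit is zero.
        return committed_port_fee

    if __name__ == "__main__":
        for tb in (10, 50, 200):
            cloud = monthly_cloud_egress_cost(tb)
            colo = monthly_colo_bandwidth_cost()
            print(f"{tb:>4} TB/month  cloud ~${cloud:,.0f}  colo flat ~${colo:,.0f}")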

In addition to potential sticker shock, the cloud can also provide somewhat limited functionality. If a system falls under that 70-percent coverage umbrella, businesses may never realize that the cloud offers less functionality and control than their own hardware. The danger here is most pronounced for businesses that start small but then have applications that grow quickly. “Many apps are born in the cloud but outgrow the environment,” reports one data-center manager interviewed for this report. “As an application matures and becomes core to your business, you may find that dedicated infrastructure becomes critical to your cloud system.” For example, your application could require advanced controls found in Domain Name System Security Extensions (DNSSEC), but your cloud provider might not be ready to support that feature.

A final hurdle that businesses may encounter when migrating to the public cloud is underestimating the effort involved in setting up and maintaining cloud-side infrastructure. AWS, for example, started out with just about five services that were drop-dead simple to configure. That service count has ballooned in recent years to more than 35 services, with some of them, such as S3, having hundreds of configuration options and facets.

Beyond cloud knowledge, a team will also likely need some in-depth knowledge of the underlying service that they will inevitably need to tune and maintain. Take databases as an example: a managed cloud database often makes backups and high availability as simple as a checkbox. But when a customer inevitably needs to tune that database, the team will still require an understanding of the hundreds of parameters they can control and what each is good for. Both architecting and operating systems in the cloud appear at first glance to be far simpler than they really are.
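As a small illustration, even a quick review of a handful of PostgreSQL settings (psycopg2 assumed; the connection string is a placeholder) shows the kind of knowledge a team still owns on a “managed” database:

    # A small illustration of the point above: even a "managed" PostgreSQL database
    # still exposes hundreds of tunables that someone on the team must understand.
    # This sketch (psycopg2 assumed; connection details are placeholders) simply
    # inspects a few performance-critical parameters so they can be reviewed.
    import psycopg2

    PARAMS_TO_REVIEW = [
        "shared_buffers",        # memory for cached data pages
        "work_mem",              # per-sort / per-hash working memory
        "effective_cache_size",  # planner's estimate of OS-level caching
        "max_connections",       # connection limit (interacts with work_mem)
    ]

    def show_tuning_parameters(dsn: str = "host=localhost dbname=app user=app"):
        with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
            cur.execute(
                "SELECT name, setting, unit FROM pg_settings WHERE name = ANY(%s)",
                (PARAMS_TO_REVIEW,),
            )
            for name, setting, unit in cur.fetchall():
                print(f"{name:<22} = {setting} {unit or ''}")

    if __name__ == "__main__":
        show_tuning_parameters()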

5 The data center as a differentiator

It seems that a growing number of today’s application architects and IT operations folks have been raised on public cloud and have a biased and incorrect view of the role of a data center. Most can rattle off the perceived evils of the first generation of data centers on command – sunk costs, limited scale, and physical hardware maintenance.

What savvy architects and operations leaders know, however, is that today’s data center can be the secret weapon that makes your product or company shine. And by using data center colocation, you can leverage a pay-as-you-go model for these large fixed data center assets, effectively gaining access to a slice of data center infrastructure that you could not afford to build yourself or would not have the expertise to operate. It’s for these reasons that data centers have been steadily and silently growing like gangbusters in recent years.

“Our DreamCompute cloud — powered by OpenStack and housed in RagingWire’s Ashburn data center — can boot an instance in less than a minute, delivering real utility and flexibility for developers to build and test their next big thing,” says Simon Anderson, CEO of DreamHost.

Although they keep improving at a rapid pace, by their very definition public cloud companies will always struggle to deliver the network speeds, systems availability, and security of businesses running applications in their own data centers. Companies with particular products or stacks that don’t fit into the cloud’s 70-percent coverage area will often need to move part of that stack into a customized hosted solution that gives them full control over functionality and better control over costs. Many large enterprises like Google and Facebook (which both custom-build and, obviously, host their own hardware) live or die based on this control. Without the speed and customizability of top-of-the-line hardware, their services would crumble under demand. It’s also important to note that the benefits of cloud can apply just as well in your own private cloud. “Maybe one of the biggest misconceptions about data centers is that they are static environments with sunk costs,” says the data-center manager quoted earlier. “The new generation of data centers are living, breathing things. They can adjust themselves to the needs of the applications, and they are incubators for business innovation.”

Another not-so-well-known fact of data centers is that, because they use specialized power delivery systems and top-of-the-line hardware in their networks, high-end data centers frequently have better uptime histories than public cloud providers. For example, although the highly durable S3 service from AWS offers an insane “11 nines” durability, it comes with a not-so-great “4 nines” of availability – equating to just under an hour of expected downtime per year.
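The arithmetic behind that “just under an hour” figure is straightforward; the snippet below converts an availability level expressed in nines into an annual downtime budget:

    # Quick arithmetic behind the availability figures cited above:
    # an availability of "n nines" implies an expected downtime budget per year.
    def expected_downtime_minutes_per_year(nines: int) -> float:
        availability = 1 - 10 ** (-nines)
        return (1 - availability) * 365.25 * 24 * 60

    if __name__ == "__main__":
        for nines in (3, 4, 5):
            minutes = expected_downtime_minutes_per_year(nines)
            print(f"{nines} nines  ->  ~{minutes:,.1f} minutes/year (~{minutes / 60:.2f} hours)")

Four nines works out to roughly 53 minutes of expected downtime per year, which is where the “just under an hour” comes from.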

6 Where a data center shines: cost

I often hear from clients that the litmus test for investigating a move to your own data center is when the monthly cloud services bill for a particular application approaches $5,000 to $7,000. Some companies that have moved back and forth between public cloud and colocation have reported savings of 75 to 85 percent in favor of colocation. In all fairness, many of the link-bait case studies out there greatly exaggerate savings, either by not factoring in their own misuse of the cloud or by focusing on a single piece of their stack where cloud costs are the highest. The recent price cuts at all the major cloud providers also were not factored into many of those studies. But even when you account for all of the above, you’ll often find that the long-term economics of your solution favor including a data center footprint in your infrastructure portfolio. For a large enough application, they likely always will.
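An honest version of that calculation has to include hardware amortization and the labor to run it. A skeleton like the one below (every figure is an assumption chosen only to show the shape of the comparison, not a benchmark) at least makes the inputs explicit:

    # Illustrative-only monthly cost comparison for a steady-state workload.
    # Every figure below is an assumption for the sake of the arithmetic, not a
    # quote: plug in your own cloud bill, colocation fees, hardware amortization,
    # and staffing numbers before drawing any conclusions.
    def cloud_monthly(instance_bill: float, storage_and_egress: float) -> float:
        return instance_bill + storage_and_egress

    def colo_monthly(rack_and_power: float, hardware_capex: float,
                     amortization_months: int, ops_labor: float) -> float:
        return rack_and_power + hardware_capex / amortization_months + ops_labor

    if __name__ == "__main__":
        cloud = cloud_monthly(instance_bill=5500.0, storage_and_egress=1500.0)
        colo = colo_monthly(rack_and_power=2000.0, hardware_capex=60000.0,
                            amortization_months=36, ops_labor=1500.0)
        print(f"cloud  ~${cloud:,.0f}/month")
        print(f"colo   ~${colo:,.0f}/month   ({(1 - colo / cloud) * 100:.0f}% lower in this scenario)")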

Occasionally cloud providers offer insight into their margins. Consider, for example, the latest 65-percent price cut on S3 object storage. If a company can drop prices on a service that dramatically, it raises two questions: (1) “How could they charge me three times that much last week?” and (2) “Where is the floor?” Although these price surprises are almost always pleasant, it can be difficult to build a business model around a technology service with such extreme price fluctuations.

A great many cloud customers find that more than 70 percent of their bill is attributable to the cost of running a single service – their virtualized server farms, or “instances.” Virtualizing your own hardware in your own data center gives you the same capability (the ability to deal with variable workloads) with more control over performance variance, and often at lower cost. This is especially true if, like most apps, your usage curve falls within well-known ranges. And even applications with extremely bursty traffic patterns can still leverage public cloud for the spikes.

When comparing public versus private cloud costs, businesses also need to be especially cognizant of the cost of their human resources. Every honest comparison factors in the HR cost and points out that the biggest savings are achieved when an IT team is already well-rounded. If, like most DevOps teams, your team is heavy on the dev and light on the ops, you may find yourself needing to hire expensive outside consultants or open full-time positions for specialists. With current cloud consultant rates in the $150-to-$250-per-hour range, it doesn’t take too many billable hours of consulting time to devour a $50,000-to-$100,000-per-year savings.
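Running the numbers makes the erosion easy to see (the rates and savings ranges are the ones cited above; the arithmetic is the only point being made):

    # How quickly consulting fees eat projected savings, using the rate and
    # savings ranges from the paragraph above.
    def hours_to_consume_savings(annual_savings: float, hourly_rate: float) -> float:
        return annual_savings / hourly_rate

    if __name__ == "__main__":
        for savings in (50_000, 100_000):
            for rate in (150, 250):
                hours = hours_to_consume_savings(savings, rate)
                print(f"${savings:,} savings at ${rate}/hr  ->  gone after ~{hours:,.0f} hours "
                      f"(~{hours / 40:.0f} weeks of full-time consulting)")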

7 Where a data center shines: control

The black-box nature of the public cloud, especially at the networking layer, is often a round hole into which your square application has trouble fitting entirely. In many ways, the cloud customer is at the mercy of the provider – only getting access to the levers the provider exposes and only getting new features when the provider decides to release them. For example, many of the providers use economical x86 servers for network functions like NAT (network address translation) and load balancing, pushing those networking tasks up into a slower software layer. Although these generic approaches certainly get the basic function done, they vastly underperform the high-end, customized layer 2 or layer 3 hardware that IT would use if given the choice.

Although the public cloud excels at the ability to quickly deploy and test new software products, it largely prevents businesses from leveraging the equally great advances in new hardware products. Even at the most basic level, a common complaint that many cloud customers express is the inability to slice and dice the individual CPU, RAM, NIC, and other virtualized hardware into instances that make sense for their applications. Most of the cloud providers still package up these virtualized hardware assets into a couple dozen instance types from which you must choose. With your own virtualization in your own data center, you can create any number of instance types, with full control over mixing and matching virtualized hardware.
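In a private cloud, by contrast, the instance catalog is yours to define. As a minimal sketch (the openstacksdk library is assumed, since OpenStack is the platform mentioned earlier in this report; the flavor names and sizes are hypothetical), you can mint instance types shaped exactly like your workloads:

    # With your own virtualization layer, instance "types" are whatever you decide
    # they are, rather than a fixed provider menu. OpenStack is used here because
    # the report mentions it; the flavor definitions below are hypothetical.
    # Assumes the openstacksdk package and a configured cloud named "private-cloud".
    import openstack

    CUSTOM_FLAVORS = [
        # (name,              vCPUs, RAM in MB, disk in GB)
        ("web.tiny",              1,      1024,         10),
        ("db.wide-memory",        8,    262144,        200),  # RAM-heavy, few cores
        ("batch.cpu-dense",      32,     65536,         50),  # CPU-heavy, light RAM
    ]

    def create_custom_flavors(cloud_name: str = "private-cloud") -> None:
        conn = openstack.connect(cloud=cloud_name)
        for name, vcpus, ram_mb, disk_gb in CUSTOM_FLAVORS:
            flavor = conn.compute.create_flavor(
                name=name, vcpus=vcpus, ram=ram_mb, disk=disk_gb
            )
            print(f"created flavor {flavor.name}: {vcpus} vCPU / {ram_mb} MB / {disk_gb} GB")

    if __name__ == "__main__":
        create_custom_flavors()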

Public cloud providers have done an admirable job in recent years of both providing more control and giving better service-level agreements around performance. However, the “noisy neighbor” problem (other customers on the same hardware node occasionally affecting the performance of your application) is still both real and difficult to diagnose. For example, one of the reasons that Google Cloud Platform delayed its public release was to fix issues it ran into with beta customers, who sometimes adversely and unexpectedly affected the performance of one another’s core business functions.

This genericized approach to services even forces some companies like Netflix (maybe the best-known “all in on public cloud” company) to create and manage some aspects of their stacks themselves. In the case of Netflix, this meant creating their own content delivery network (CDN) rather than using their cloud provider’s. Netflix found that their cloud provider did not give them the control they needed for this critical element of their IT infrastructure.

With public cloud, businesses often trade control for convenience: “quick and easy” wins out over “highly optimized and fast.” As an application grows and evolves, businesses may well find themselves running into more and more of the limitations of their cloud provider and could find themselves at a disadvantage to competitors who leverage the best solutions available, in whole or in part, at their own data center.

8 Where a data center shines: security

Every major cloud provider today has achieved virtually every major certification a customer might want, and users of those cloud services have built HIPAA-, PCI-, ITAR-, and ISO 27001-compliant stacks of their own on top of them. In many ways, public cloud is even more secure than an in-house data center, offering point-and-click firewalls, fairly rich authentication and authorization tools, and traffic blocking at the hypervisor layer (preventing neighbors from sniffing your packets). However, there are many ways in which your own data center is and always will be more secure.

Lack of detailed service-level logging, lack of fine-grained control over the networking on edge devices, lack of full ownership of your data, and the potentially catastrophic effects of misconfiguring “easy to change” provider-supplied security tools all immediately rise to the top of that list.

When it comes down to it, the cloud is also still very much a black box when viewed from a security perspective. Provider services and tools have come a long way, but compared to the best data center security, they have a long way left to go. When a cloud provider tells you “we do x, y, and z” to ensure your security, you have to hand over trust entirely, as the validation tools you would like to use (for example, audit logs) are largely absent.

Any organization of decent size has probably already built security systems and processes. Retrofitting these to work inside a cloud provider’s more limited set of controls can be difficult and complex, and it often results in a softening of security requirements.

If the physical security of your systems is important, the new generation of data centers offers exceptional services. The best data centers deploy three-factor authentication – something you have, something you know, and something you are – across multiple challenge points. For example, as you make your way from the data center lobby to your servers, you may be required multiple times to swipe your magnetic identification card, type in a personal identification number (PIN), and pass an iris or fingerprint scan. A 7×24 on-site security team tracks all activity using high-definition video, and video files are saved for validation purposes. Reports can be pulled to show who entered and left the facility and your data center area, and only authorized people are allowed on the data center floor.

9 Cloud-plus-data-center matrix

The public cloud is not all unicorns and rainbows, and the data center is something that needs to be understood and embraced. We’ve focused heavily on the benefits of the data center and the challenges of the cloud because too many conversations these days focus on the reverse. The public cloud is a wonderful tool and a huge enabler for many, if not most, of the common architecture and operations concerns of today’s IT designs. But “many” and “most” are far from “all.”

The data center is also a wonderful tool. Just as it’s incredibly useful to know the differences between a bus, a Volvo, and a Ferrari, it’s also an absolute requirement for any architect or VP of operations to know and individually embrace the benefits of public, private, and hybrid architectures.

In the chart below, we’ve created a matrix of which architectures lend themselves to which approaches and why.

Table: Cloud-plus-data-center decision matrix – which architectures lend themselves to which approaches, and why (not reproduced here).

10 Key takeaways

In the rush to leverage the benefits of the public cloud, some organizations have forgotten one of the most useful tools at their disposal – the data center. A best-of-breed cloud-plus-data-center approach to hosting and serving your IT assets will increasingly involve hybrid deployments that independently leverage the agility and pay-as-you-go benefits of the public cloud and the cost savings and control that are possible with your own data center.

  • A one-size-fits-all approach (either all cloud or all in-house) could cripple your business. Successful IT strategies leverage both public cloud and data center assets.
  • Public cloud is, by definition, a commodity black box covering maybe the top 70 percent of common use cases and functionalities. Even if you can color inside of those lines, you’ll find many stacks cost less when hosted on your own hardware.
  • Areas where the public cloud shines are disposable environments (especially for dev, test, and proofs of concept), genericized high availability, backup and disaster recovery, generic web products, object storage, sporadic data analysis (like MapReduce), and manic workloads.
  • Areas where data centers shine are mission-critical applications; cost, for apps whose computing needs fall within well-known ranges; control, especially at the networking layer, including the ability to use specialized hardware and to access all of the functionality, tuning, and optimization required for high-throughput apps; security, with access to best-of-breed hardware and software; and heavy, steady-state batch analysis workloads (like MapReduce).
  • The data center is still an incredibly useful tool that your company should leverage. Cost versus control versus security calculations must be done for every aspect of your specific application to determine which areas of which stacks lend themselves better to public or private hosting. Even in the same application, you may choose, for example, to leverage public cloud object storage to host and serve static assets while keeping the virtualized servers in-house.

11 About Rich Morrow

Rich Morrow is an Analyst for Gigaom Research and a 20-year open-source technology veteran who enjoys coding and teaching as much as writing and speaking. His current passions are cloud technologies (mainly AWS and Google Cloud Platform) and big data (Hadoop and NoSQL), and he spends about half of his work life traveling around the country training the Fortune 500 on their use and utility. He leads the Denver-Boulder Cloud Computing Group, as well as quicloud, a Boulder, Colo.-based cloud and big data consultancy.

12 About Gigaom Research

Gigaom Research gives you insider access to expert industry insights on emerging markets. Focused on delivering highly relevant and timely research to the people who need it most, our analysis, reports, and original research come from the most respected voices in the industry. Whether you’re beginning to learn about a new market or are an industry insider, Gigaom Research addresses the need for relevant, illuminating insights into the industry’s most dynamic markets.

Visit us at: research.gigaom.com.

13 About RagingWire Data Centers

RagingWire designs, builds, and operates mission-critical data centers that deliver 100-percent availability and high-density power. The company has more than 800,000 square feet of data center infrastructure in Northern California and Ashburn, Virginia, and is affiliated with the global network of 150 data centers operated by NTT Communications under the Nexcenter™ brand. RagingWire’s patented power delivery systems and EPA ENERGY STAR-rated facilities lead the data center market in reliability and efficiency. With flexible colocation solutions for retail and wholesale buyers, a carrier-neutral philosophy, and the highest customer loyalty in the industry as measured by the Net Promoter Score®, RagingWire meets the needs of top internet, enterprise, and government organizations.

More information is available at www.ragingwire.com.

 

14 Copyright

© Knowingly, Inc. 2014. "Cloud and data centers join forces for a new IT platform for internet applications and businesses" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.
