GigaOm Sonar for Composable Infrastructure v1.0

An Emerging Technology Insight Report

Table of Contents

  1. Summary
  2. Report Methodology
  3. Overview
  4. Considerations for Adoption
  5. GigaOm Sonar
  6. Market Landscape
  7. Near-Term Roadmap
  8. Analyst’s Take

1. Summary

Unlike traditional infrastructure designs, which are based on servers with a finite quantity of local resources such as CPU, GPU, RAM, network connectivity, and storage, a composable infrastructure is organized into disaggregated resource pools that can be reconfigured quickly depending on momentary application needs (see Figure 1). This approach enables users to maximize their data center resources and increase flexibility and agility while making resource utilization and scalability more granular. As a result, infrastructure composability is much more efficient and delivers a better overall TCO.

Figure 1. Comparison Between Traditional and Composable Infrastructure

As the COVID-19 pandemic revealed, businesses need to react quickly to changing scenarios, and IT flexibility is the key to that response. What’s more, efficiency is now the main principle behind new infrastructure design: better efficiency means higher density, lower power consumption, faster performance, and improved TCO. To achieve these results, most enterprises are working to build effective hybrid cloud strategies that combine the flexibility of the public cloud with the efficiency of on-premises infrastructure. In this context, composability will play a big role both for cloud providers, which need to keep costs down even as they increase the flexibility of their infrastructure and associated services, and for enterprises that want to cut costs and maximize efficiency in their data centers.

For large-scale enterprise data centers in particular, economies of scale are no longer enough unless they are paired with the flexibility to reconfigure the infrastructure according to ever-changing business needs. Enterprises now run a growing number of workloads that need very different types of resources, and most of these workloads do not run all the time. For example, an enterprise might run a VDI farm during business hours, then reallocate those resources during nights and weekends to batch jobs for ERP and MRP, data warehouses, big data analytics, R&D tasks, ML training, and so on.
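To make this day/night reallocation concrete, the following is a minimal sketch of how it could be automated against a Redfish-style composition API of the kind several composable infrastructure products expose. The host, credentials, and system name are hypothetical, and exact paths and payload fields vary by vendor; treat this as an illustration of the workflow, not any specific product’s interface.

    # Minimal sketch: recompose a logical server from pooled resources via a
    # Redfish-style Composition Service. Host, credentials, and names below
    # are illustrative; consult your vendor's API reference for actual ones.
    import requests

    HOST = "https://composer.example.com"   # hypothetical composer appliance
    AUTH = ("admin", "password")            # hypothetical credentials

    def free_resource_blocks(session):
        """Return @odata.id paths of resource blocks not yet used by any system."""
        blocks = session.get(f"{HOST}/redfish/v1/CompositionService/ResourceBlocks").json()
        free = []
        for member in blocks.get("Members", []):
            block = session.get(HOST + member["@odata.id"]).json()
            state = block.get("CompositionStatus", {}).get("CompositionState")
            if state == "Unused":
                free.append(member["@odata.id"])
        return free

    def compose_system(session, name, block_paths):
        """POST a new logical system assembled from the given resource blocks."""
        payload = {
            "Name": name,
            "Links": {"ResourceBlocks": [{"@odata.id": p} for p in block_paths]},
        }
        return session.post(f"{HOST}/redfish/v1/Systems", json=payload)

    with requests.Session() as s:
        s.auth = AUTH
        # e.g., at 7 p.m. the VDI farm is torn down elsewhere, and its freed
        # CPU, GPU, and storage blocks are rebuilt into a batch-processing node
        free = free_resource_blocks(s)
        resp = compose_system(s, "nightly-batch-node", free[:4])
        print(resp.status_code, resp.json().get("Id"))

When the batch window closes, decomposing the system (in Redfish, typically a DELETE on the composed system resource) returns the blocks to the pool for the next morning’s VDI build.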

How We Got Here

The concept of composability is as old as modern IT infrastructures and the goal has always been the same: to increase efficiency. Mainframes and large Unix servers used partitioning, in which a large machine could be sliced into smaller physical or logical partitions, each with an associated set of resources, operating system, and applications. With the advent and success of smaller and cheaper servers, virtualization became a common technique for optimizing server use. Later, thanks to hyperconvergence, we saw the first attempts to use remote resources (mainly storage) over the network to improve efficiency at a larger scale.

Unfortunately, reaching rack- or data center-scale configuration granularity for compute, memory, network, storage, and accelerators was not possible because the technology was not yet available. Even today it is not possible to physically separate memory from the CPU in mainstream servers, although the industry is just about ready to take this step. The main obstacle is that the PCIe bus was originally designed to connect the internal components of a single server, not to be extended over external connections.

Recently, however, some vendors have worked around this limitation and built proprietary PCIe bus extensions to share resources across multiple servers. Even better, a new specification, Compute Express Link (CXL), a cache-coherent interconnect built on PCIe 5.0, has emerged to make this kind of bus extension a standard. The vendor workaround products are already available, while those based on the new specification are still under development.
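As a point of reference for what disaggregated memory looks like to software: on current Linux kernels, CXL-attached memory expanders typically appear as NUMA nodes that have memory but no CPUs. The short sketch below assumes only the standard kernel sysfs layout and flags such CPU-less nodes; it is illustrative rather than part of any vendor’s tooling.

    # Minimal sketch: on Linux, memory attached over CXL typically shows up
    # as a NUMA node with memory but no CPUs. Walk sysfs and flag such nodes.
    from pathlib import Path

    for node in sorted(Path("/sys/devices/system/node").glob("node[0-9]*")):
        cpulist = (node / "cpulist").read_text().strip()
        meminfo = (node / "meminfo").read_text()
        # First meminfo line looks like: "Node 0 MemTotal:  32768 kB"
        mem_kb = int(meminfo.splitlines()[0].split()[-2])
        kind = "CPU-less (possibly CXL-attached)" if not cpulist else "local"
        print(f"{node.name}: cpus=[{cpulist or 'none'}] mem={mem_kb // 1024} MiB -> {kind}")

Because such memory is just another NUMA node, existing placement tools and allocation policies can target it without application changes, which is precisely what makes the CXL approach attractive for composability.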

About the GigaOm Emerging Technology Insight Report

This GigaOm report focuses on emerging technologies and market segments. It can help organizations of all sizes understand a technology, its strengths and weaknesses, and its fit within an overall IT strategy. The report is organized into four sections:

  • Technology Overview: An outline of the technology, its major benefits, possible use cases, and the relevant characteristics of different product implementations in the market.
  • Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology into an enterprise IT scenario, including table stakes and key differentiating features, as well as consideration of how to integrate the new product into an existing environment.
  • GigaOm Sonar: A graphical representation of the market and its most important players, focused on their value proposition and their roadmaps for the future. This section also includes a breakdown of each vendor’s offering in the sector.
  • Near-Term Roadmap: A 12-18 month forecast of the future development of the technology, its ecosystem, and major players in this market segment.
