Kubernetes Backgrounder: What to Consider Before Deployment

An Introduction to Kubernetes Container Orchestration

Enterprises of all sizes are embracing hybrid cloud strategies that are becoming more complex and structured, and they want the flexibility to decide dynamically where applications and data should run, depending on business, technical, and financial factors.

The use of Kubernetes brings this vision within reach of most organizations, but you need the right integration with infrastructure layers, such as storage, to make it happen.

How should organizations set up their storage infrastructure to make using Kubernetes—and the enterprise efficiencies it enables—achievable?

Below, we look at what makes Kubernetes so powerful and important to today’s enterprises, and outline the infrastructure and technology needed to create the right setup for it.

The Drive Toward Hybrid Solutions

Every enterprise faces different challenges when deploying business-critical systems. A common problem is lock-in, where the choice of a particular business system conflicts with other technologies the business already uses. Proprietary systems can compromise interoperability and, for example, force costly workarounds.

Not surprisingly, enterprises are keen to avoid this; at the same time, they want a consistent feature set across different on-premises and cloud infrastructures.

They often choose the public cloud for its flexibility and agility, though on-premises infrastructures can still be a better option in terms of efficiency, cost, and reliability. In this type of scenario, it is highly likely that development and testing would take place on the public cloud, while production could be on-premises, in the cloud, or both, depending on the business, regulatory, economic, and technical needs of the particular enterprise.

Containers and Microservices Changed Everything

Containers, and microservices in general, have radically changed how applications are developed and deployed. An application and its operating system dependencies, such as system libraries and configuration files, are packaged together and abstracted from the underlying host operating system.

Thanks to its small footprint, a container image can be deployed, updated, and moved quickly when necessary. Developers have embraced this model for new applications, and many organizations have started refactoring older ones where possible. This trend has contributed to the rapid adoption of containers in enterprise environments and to growing demand for enterprise-grade solutions that work well alongside existing traditional infrastructures.
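
As a small illustration of this portability, the sketch below uses the Docker SDK for Python to start a container from a prebuilt image on a local Docker daemon. The image name, container name, and port mapping are arbitrary assumptions, not recommendations.

    # Minimal sketch using the Docker SDK for Python (the "docker" package).
    # The image, container name, and ports below are illustrative assumptions.
    import docker

    docker_client = docker.from_env()  # connect to the local Docker daemon

    # The image bundles the application with its libraries and configuration,
    # so the same artifact can run unchanged on any host with a container runtime.
    container = docker_client.containers.run(
        "nginx:alpine",          # prebuilt image
        name="demo-web",         # hypothetical container name
        ports={"80/tcp": 8080},  # map container port 80 to host port 8080
        detach=True,
    )
    print(container.id, container.status)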

More and more stateful applications, such as databases and key-value stores, are migrating to these platforms and require additional resources and performance from the infrastructure that supports them.

Why Kubernetes Matters

The trend toward containers, and the advantages of using them, has catapulted Kubernetes into a position of real importance in enterprise technology. Kubernetes is a container orchestrator: applications are organized into sets of containers (called pods), and the orchestrator continuously works to ensure that enough resources are allocated to provide the level of service required by the application and its users. This includes application availability, load balancing across the infrastructure, and scalability.
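
As a concrete sketch of this declarative model, the example below uses the official Kubernetes Python client to create a Deployment with three replicas and explicit resource requests. The deployment name, image, namespace, and resource values are illustrative assumptions.

    # Minimal sketch: a Deployment with three replicas and resource requests,
    # created with the official Kubernetes Python client. Names, image, and
    # sizes are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()  # inside a cluster, load_incluster_config() would be used instead

    deployment = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="demo-app"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # the orchestrator keeps three pods running
            selector=client.V1LabelSelector(match_labels={"app": "demo-app"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "demo-app"}),
                spec=client.V1PodSpec(
                    containers=[
                        client.V1Container(
                            name="web",
                            image="nginx:1.25",
                            resources=client.V1ResourceRequirements(
                                requests={"cpu": "250m", "memory": "128Mi"},
                                limits={"cpu": "500m", "memory": "256Mi"},
                            ),
                        )
                    ]
                ),
            ),
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)

If a pod or node fails, the controller recreates pods elsewhere in the cluster until the observed state matches the three declared replicas.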

To do this, and because of the nature of microservices themselves, containers are frequently spun up and down or moved to different locations across the cluster. The number of operations in a large cluster can be massive, not only in terms of IOPS but also in terms of rapid resource provisioning and deallocation. These types of workflows (system operations) and workloads (I/O operations on many small data volumes) put a lot of pressure on the storage system, which can quickly become a bottleneck and undermine the performance and reliability of the entire infrastructure.
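
The short sketch below, again using the Python client, shows the kind of churn described above: scaling the hypothetical deployment from the previous example up and back down, with each change creating or tearing down pods and triggering the corresponding provisioning work on the infrastructure.

    # Sketch: scaling the hypothetical "demo-app" Deployment up and down.
    # Every change creates or removes pods, generating provisioning and
    # deallocation work for the underlying storage and network layers.
    from kubernetes import client, config

    config.load_kube_config()
    apps = client.AppsV1Api()

    def scale(name: str, namespace: str, replicas: int) -> None:
        # Patch only the scale subresource of the Deployment.
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    scale("demo-app", "default", 10)  # burst up to handle load
    scale("demo-app", "default", 3)   # shrink back afterwards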

CSI: Exposing Storage

The interface between the containers managed by Kubernetes and the underlying storage infrastructure is provided by the Container Storage Interface (CSI). CSI is a standard developed to expose block and file storage systems to containerized workloads on container orchestration systems such as Kubernetes.

By adopting CSI, third-party storage vendors can write and deploy plug-ins that expose their storage systems in Kubernetes without ever having to touch the core Kubernetes code, as illustrated in Figure 1. CSI gives Kubernetes users more options for storage and makes the system more flexible, secure, and reliable. Similar interfaces exist for networking (the Container Network Interface, or CNI) and for container runtimes (the Container Runtime Interface, or CRI).

Figure 1: How Kubernetes Manages Containers
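
As a sketch of how this looks in practice, the example below registers a StorageClass that points at a hypothetical CSI driver and then requests a volume from it through a PersistentVolumeClaim, using the Kubernetes Python client. The provisioner name, class name, and capacity are assumptions for illustration, not a real vendor driver.

    # Sketch: a StorageClass backed by a hypothetical CSI driver, plus a
    # PersistentVolumeClaim that dynamically provisions a volume from it.
    # The provisioner, names, and size are illustrative assumptions.
    from kubernetes import client, config

    config.load_kube_config()

    storage_class = client.V1StorageClass(
        metadata=client.V1ObjectMeta(name="fast-csi"),
        provisioner="csi.example.vendor.com",        # hypothetical CSI driver name
        reclaim_policy="Delete",
        volume_binding_mode="WaitForFirstConsumer",  # provision when a pod is scheduled
    )
    client.StorageV1Api().create_storage_class(body=storage_class)

    pvc = client.V1PersistentVolumeClaim(
        metadata=client.V1ObjectMeta(name="demo-data"),
        spec=client.V1PersistentVolumeClaimSpec(
            access_modes=["ReadWriteOnce"],
            storage_class_name="fast-csi",
            resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
        ),
    )
    client.CoreV1Api().create_namespaced_persistent_volume_claim(
        namespace="default", body=pvc
    )

When a pod that references the claim is scheduled, Kubernetes calls the CSI plug-in to provision and attach the volume on the chosen node, all without any change to the core Kubernetes code.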

Deployment Considerations

Kubernetes can help to simplify and manage a complex storage environment with multiple applications. But deploying a Kubernetes ecosystem requires the underlying infrastructure to be examined carefully.

What should organizations consider before deciding to implement Kubernetes? Three key factors are:

  • Persistent and reliable data storage – Efficient use of Kubernetes depends on the number of operations the storage system can handle at the control-plane level. Fast resource provisioning and removal are crucial to keeping up with Kubernetes requests (a short example follows this list).
  • Data management – Kubernetes is designed to support applications with specific resiliency and availability characteristics, so operations focus more on the application and data management layers and less on the physical infrastructure.
  • Security – With data and applications moving across on-premises and cloud environments, it is important to maintain a consistent set of security features across different infrastructures.
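
For the first point, the sketch below mounts the hypothetical "demo-data" claim from the earlier example into a pod, so the data outlives any individual container. The image, paths, and credentials are assumptions for illustration only.

    # Sketch: a pod that mounts the hypothetical "demo-data" claim, so the
    # database files survive container restarts and rescheduling.
    from kubernetes import client, config

    config.load_kube_config()

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="demo-db"),
        spec=client.V1PodSpec(
            containers=[
                client.V1Container(
                    name="db",
                    image="postgres:16",
                    env=[  # demo only; real deployments should use a Secret
                        client.V1EnvVar(name="POSTGRES_PASSWORD", value="example")
                    ],
                    volume_mounts=[
                        client.V1VolumeMount(
                            name="data",
                            mount_path="/var/lib/postgresql/data",
                        )
                    ],
                )
            ],
            volumes=[
                client.V1Volume(
                    name="data",
                    persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
                        claim_name="demo-data"
                    ),
                )
            ],
        ),
    )
    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)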

The goal is to provide a common data storage layer that is abstracted from physical and cloud resources, with a standard set of functionalities, services, protection, security, and management.

Conclusion

Kubernetes has been a game-changer: Combined with the power of containers, it has created the perfect tool to manage modern hybrid environments, incorporating the different storage and software solutions used by most modern enterprises.

Setting up the storage solutions correctly can help to increase the efficiency of Kubernetes and the effectiveness of business-critical systems.