Does Kubernetes require a special data storage and management system? Given that traditional storage solutions are falling short, this may be the case. As enterprises try to force-fit Kubernetes onto traditional storage solutions, they may find themselves in a "square peg in a round hole" situation. Consider the existing enterprise investment in storage technology: using the same storage that currently runs VM workloads looks like the path of least resistance and least risk. But is it?
It turns out that running stateful services such as databases on Kubernetes requires specific container-native storage systems. Without this technology, certain issues will become commonplace: stuck volumes, downtime, overprovisioning, lost data, and manual backups and migrations. There is a clear case to be made for moving away from traditional storage and management systems. So, if container-native storage is the way to go, how do you select the right solution?
In this report-based webinar, GigaOm Research analyst David Linthicum and guest Michael Ferranti, VP of Product Marketing at Portworx, walk you through the key steps to understanding your own organizational and technical requirements. Expect to explore available solutions, including the tradeoff between traditional storage and Kubernetes-native storage, and to examine performance, cost, resilience, and security in light of business continuity/disaster recovery (BC/DR) and operational use cases.
In this 1-hour webinar, key questions will be answered:
- What is the best way to manage self-service? – How do we provision and manage container-granular storage as needed, on demand? Kubernetes-native storage provides this agility and availability; its absence in traditional systems often hinders developers and slows enterprise development.
- What does it mean to be application-aware? – Kubernetes-based applications are composed of multiple containers that run across a fleet of hosts. This means that storage operations such as backup, recovery, DR, and encryption must be applied at the application-granular level, not the machine level as with traditional storage. Additionally, you must be able to manage groups of applications using concepts such as Kubernetes namespaces.
- What role does automation play? – Automation acts as a layer that handles the details of storage implementation, as well as other operations that need to occur, such as BC/DR. It removes most manual processing, which is a key advantage; a lack of automation leads to errors and inconsistency. The goal is to automate much of operations, including auto-provisioning and scaling, to provide a self-healing infrastructure: resources such as storage and compute can recover from simple errors, such as resizing a container volume that has run out of capacity, even if this means provisioning additional physical storage.
- How do we leverage an infrastructure-agnostic SLA for Kubernetes? – This means delivering a consistent level of service no matter which infrastructure is in use, rather than binding SLAs directly to any portion of the infrastructure, such as platform services.
- How can we optimize costs when leveraging Kubernetes? – Put processes in place that improve cost efficiency. This includes automating provisioning systems to ensure that no more storage is allocated than needed, and building cost governance into core development and operational processes.
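To make the self-healing idea above concrete: in Kubernetes, growing a volume that has run out of capacity comes down to raising the storage request on its PersistentVolumeClaim, provided the StorageClass sets `allowVolumeExpansion: true`. The sketch below (a minimal illustration, not from the webinar; the claim name `my-claim` is hypothetical) builds the JSON merge patch an automation layer would submit:

```python
import json

def make_pvc_expand_patch(new_size: str) -> str:
    """Build the JSON merge patch that grows a PersistentVolumeClaim.

    Container-native storage drivers whose StorageClass declares
    allowVolumeExpansion: true will resize the backing volume when the
    claim's storage request is raised, provisioning additional physical
    storage if the pool allows it.
    """
    patch = {"spec": {"resources": {"requests": {"storage": new_size}}}}
    return json.dumps(patch)

# Equivalent kubectl invocation (my-claim is a placeholder name):
#   kubectl patch pvc my-claim -p '{"spec":{"resources":{"requests":{"storage":"20Gi"}}}}'
print(make_pvc_expand_patch("20Gi"))
```

An automated operator would watch volume-usage metrics and submit such a patch when capacity runs low, which is exactly the manual step that Kubernetes-native automation removes.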
Discover what you need to know now about Scalable Data Storage for Kubernetes, explore what the public cloud will be like in both the short and long term, and learn where to invest your time today, as well as in the future.
Register now to join GigaOm and Portworx for this free expert webinar.
Who Should Attend:
- Chief Architects or Chief Engineers
- Product or Solution Architecture Leaders
- Enterprise, Solutions, Infrastructure or Cloud Architects
- Software or Infrastructure Engineers
- Product Management Leaders