The OpenStack cloud-management framework has come a long way since its launch four and a half years ago. Backed by several hundred of the IT industry’s biggest players, and with significant code releases every six months, the project now figures prominently whenever CIOs consider how their IT capabilities can best adapt and evolve to changing demands.
With all of OpenStack’s code freely available under an open-source license, it can appear straightforward to download the code and deploy an OpenStack cloud for internal use. Some, such as LivePerson, have done this and successfully run an internal cloud deployed across hundreds of hosts in multiple data centers.
But for many, the early cost savings and flexibility of homegrown deployments ultimately prove a false economy. Local configuration choices do not always benefit from the latest best practice in the broader OpenStack community, and local customizations and tweaks to the code gradually move an installation further from mainstream OpenStack; these problems usually compound with each subsequent release from the OpenStack Foundation. Although affordable to launch, homegrown OpenStack deployments may end up isolated from improvements to the mainstream code, increasingly expensive to patch and maintain, and potentially unsuccessful.
This report discusses some of the ways in which OpenStack projects are deployed. It explores lessons from across the industry in order to highlight emerging best practices that help ensure a successful and sustainable solution to local business requirements.
- OpenStack: No Longer a Science Project
- 3 Ways to Deliver OpenStack Clouds
- Key Takeaways
- About Paul Miller
- About Gigaom Research