Hyper-converged infrastructure (HCI) is very popular with organizations of all sizes now, but it's not perfect. As with most types of infrastructure, a compromise must be found between performance, flexibility, usability, and cost. What's more, it is quite hard to find a single storage infrastructure that covers every use case that needs to be addressed.
HCI is a good solution for small organizations, but the architecture imposes trade-offs that limit its potential in larger enterprises with a sizeable, established environment to support. Put another way, HCI is a perfect example of the 80/20 rule: 80% of existing workloads are predictable and dynamic, and a good fit for HCI. The problem is that the remaining 20% of your infrastructure is expected to grow exponentially with Artificial Intelligence/Machine Learning (AI/ML), Internet of Things (IoT), edge computing projects, and more, all of which organizations are evaluating now and which will impact business competitiveness in the coming years.
THE GOOD OF HCI
User-friendliness is what most organizations like about HCI. There is no need to change infrastructure operations: teams keep using the hypervisor they are accustomed to and face fewer complications around storage management. The infrastructure is simplified thanks to the modular scale-out approach, in which each new node adds more CPU, RAM, and storage. As a result, HCI delivers good TCO figures as well.
THE BAD OF HCI
The limitations of HCI arise from exactly what makes it a good solution for ordinary workloads and virtualization. In fact, it's not really designed to cover all types of workloads (think of big data, for example). In addition, not all applications scale the same way, meaning that your infrastructure sometimes needs different types of resources (e.g. storage-only nodes for capacity). Last but not least, most HCI products on the market focus on either edge or core use cases, but cannot cover both concurrently and efficiently at a reasonable cost. These limitations might be of secondary importance today, but in the long term that could radically change with new, unforeseen performance, capacity, and technology requirements.
THE UGLY OF HCI
The initial investment to adopt HCI is often quite high, especially if storage and server amortization cycles differ. Since HCI means purchasing servers, storage, and networking together, some organizations are forced to begin HCI adoption with individual workloads, or to turn to financing options to purchase the entire infrastructure at once. This is merely a financial issue, but I've seen it happen several times with customers of all sizes trying to manage their budgets wisely. It can slow down HCI adoption and result in a long transition period that benefits no one.
CLOSING THE CIRCLE
The perfect infrastructure that excels in every aspect does not yet exist. HCI is a good compromise for a lot of workloads but fails with the most demanding ones, and what are usually considered strengths can quickly become weaknesses.
Recently, I had the chance to be briefed by DataCore on their HCI solution. I really like their approach, and I think it could address some of the issues discussed in this blog.
Originally posted on Juku.it