Hyper Convergence Poses Unique Challenges for SAN Technologies


With the move toward hyper-convergence in full swing, many organizations face the challenge of moving their massive data stores into virtualized environments. The situation came to the forefront at VMworld 2016, where all things related to hyper-convergence were discussed ad nauseam.

Even so, many were still left wondering whether traditional storage technologies, such as SAN and NAS, could effectively coexist in an environment transitioning to hyper-convergence. What's more, the uncertainties of that transition, driven by potential communication problems, performance issues, and incompatibilities, could force wholesale, expensive upgrades to support the move, an outcome many network managers and CIOs would love to avoid.

Simply put, the move toward hyper-convergence, which promises improved efficiency and reduced operating expenses, can be derailed by the high costs of transitioning to virtualized SANs, an irony worth noting. Nevertheless, those challenges have not stopped VMware Virtual SAN from becoming the fastest-growing hyper-converged solution, with over 3,000 customers to date. That said, there is still room for improvement, such as helping VMware Virtual SAN support even more workloads, and that is exactly where vendor Primary Data comes into play.

At VMworld 2016, Primary Data announced the availability of the company's DataSphere platform, which brings storage-agnostic management to virtualized environments. In other words, Primary Data is able to tear down storage silos without actually disrupting the configuration of those silos. It accomplishes that with a virtualization layer that masks the individual storage silos and presents them as a unified, tiered storage lake, which is driven by policies and offers almost limitless configuration options.

Abstracting data from storage hardware is not a new idea. However, Primary Data goes far beyond what companies such as FalconStor and StoneFly bring to the world of hyper-convergence. For example, DataSphere offers a single-pane-of-glass management console, which unifies management across the various storage tiers, regardless of the storage type. What's more, the platform goes beyond the concept of an SLA (Service Level Agreement) and introduces a new concept, abbreviated as SLO (Service Level Objective). Primary Data's Kaycee Lai, an executive with the company, explained to GigaOM that "SLOs are business objectives for applications. They define a commitment to maintain a particular state of the service in a given period. For example, specific write IOPS, read IOPS, latency, and so forth, to maintain for each application. SLOs are measurable characteristics of the SLA."

Lai added “DataSphere will support DAS, NAS, and Object as storage types. Block level support for SAN will follow in the next release.” One of the key elements offered by the platform is the ability to work with storage tiers, without the disruption of having to rebuild storage silos. Lai added “Tiers are a logical concept in DataSphere. Tiers are simply a class of storage that is mapped to a particular SLO. The notion of having multiple tiers is not as important as having multiple objectives requiring the specific storage to meet those objectives. Customers can create as many objectives as their business requires.”
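The tier-to-SLO relationship Lai describes can be modeled in a few lines of code. The sketch below is purely illustrative and assumes nothing about DataSphere's actual implementation: the `SLO` class, the example tier names, and the numeric targets are all hypothetical, chosen only to show the idea that a tier is simply a class of storage mapped to a set of measurable objectives.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SLO:
    """A Service Level Objective: measurable targets storage must meet."""
    name: str
    min_read_iops: int
    min_write_iops: int
    max_latency_ms: float

# A tier is just a class of storage mapped to a particular SLO.
# These tier names and numbers are invented for illustration.
TIERS = {
    "flash":   SLO("performance", min_read_iops=50_000, min_write_iops=20_000, max_latency_ms=1.0),
    "hybrid":  SLO("general",     min_read_iops=10_000, min_write_iops=5_000,  max_latency_ms=5.0),
    "archive": SLO("capacity",    min_read_iops=500,    min_write_iops=200,    max_latency_ms=50.0),
}

def matching_tiers(read_iops: int, write_iops: int, latency_ms: float) -> list:
    """Return the tiers whose SLO satisfies an application's requirements."""
    return [
        tier for tier, slo in TIERS.items()
        if slo.min_read_iops >= read_iops
        and slo.min_write_iops >= write_iops
        and slo.max_latency_ms <= latency_ms
    ]
```

Under this model, an application asking for 8,000 read IOPS, 4,000 write IOPS, and latency under 10 ms would be eligible for both the flash and hybrid tiers, and a policy engine could then place its data on whichever eligible tier costs least, which is the essence of letting objectives, rather than fixed tier assignments, drive placement.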

In the quest to make hyper-convergence commonplace, Primary Data smooths the bumpy storage path with several capabilities, which the company identifies as:

  • Adapt to continually changing business objectives with intelligent data mobility.
  • Scale performance and capacity linearly and limitlessly with unique out-of-band architecture.
  • Reduce costs through increased resource utilization and simplified operations.
  • Simplify management through global and automated policies.
  • Accelerate upgrades of new solutions such as VMware vSphere 6 with seamless migration using existing infrastructure.
  • Reduce application downtime with automated non-disruptive movement of data.
  • Deliver a full range of data services across all applications in the data center.
