Bandwidth, throughput, and latency aren’t issues when you are within the boundaries of a data center, but things drastically change when you have to move data over a distance. Applications are designed to process data and provide results as fast as possible, because users and business processes now require instant access to resources of all kinds. This is not easy to accomplish when data is physically far from where it is needed.
In the past decade, with the exponential growth of the internet, remote connectivity, and, later, large quantities of data, lack of bandwidth became a major issue. A first generation of wide area network (WAN) optimization solutions appeared on the market with the intent of overcoming the constraints of limited-bandwidth connectivity. Sophisticated data-reduction techniques such as compression, deduplication, traffic shaping, caching, and proxying were integrated to minimize traffic between data centers and branch offices and for DC-to-DC communication. WAN optimization can contribute significantly to improving the quality and quantity of services delivered to branch offices, enable storage replication over longer distances for disaster recovery (DR) or business continuity (BC), reduce WAN costs, and improve mobile connectivity.
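Deduplication, one of the techniques mentioned above, works by sending only data chunks the remote side has not already seen. The following is a minimal sketch, assuming fixed-size chunks and SHA-256 fingerprints; the function name and chunk size are illustrative, and real products typically use variable-size (content-defined) chunking:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative; real systems often use variable-size chunks

def dedup_transfer(data: bytes, seen: set[str]) -> list[bytes]:
    """Return only the chunks whose fingerprint the remote side lacks."""
    to_send = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in seen:
            seen.add(digest)       # remote side now knows this chunk
            to_send.append(chunk)
    return to_send

seen: set[str] = set()
payload = b"A" * CHUNK_SIZE * 8 + b"B" * CHUNK_SIZE * 2  # highly repetitive
sent = dedup_transfer(payload, seen)
print(len(payload), sum(len(c) for c in sent))  # 40960 bytes shrink to 8192
```

The savings come entirely from redundancy in the payload: only two of the ten chunks are unique, so only those two cross the WAN.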
Recently, things have changed significantly. Traditional WAN optimization was conceived mainly to solve a lack of bandwidth at a time when legacy protocols were designed for local area network (LAN) connectivity. Data was neither compressed nor encrypted, and computers were unable to manage huge amounts of complex data. Now things are the other way around: High-bandwidth links (10 Gbps or more) are considerably cheaper than in the past, new protocols are emerging, data is often compressed and encrypted at the source, and even mobile devices can concurrently manage multiple large data streams. Traditional WAN optimization was simply not designed to efficiently manage these new requirements. Efficiency, utilization, and latency are the real issues now.
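The point about data compressed or encrypted at the source is easy to demonstrate: such data looks statistically random, so a second compression pass in the WAN appliance gains nothing. A quick illustration with Python's standard `zlib` module, using random bytes as a stand-in for encrypted traffic:

```python
import os
import zlib

text = b"the quick brown fox jumps over the lazy dog " * 1000
random_like = os.urandom(len(text))  # stands in for encrypted/pre-compressed data

# Redundant plaintext compresses dramatically...
print(len(zlib.compress(text)) / len(text))                # well below 1.0
# ...while random-looking data does not shrink at all.
print(len(zlib.compress(random_like)) / len(random_like))  # about 1.0
```

This is why, as the amount of source-compressed and encrypted data grows, the data-reduction engines of traditional WAN optimizers contribute less and less.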
Next-generation WAN optimization, designed with a radically new philosophy, has the right characteristics to offer unprecedented scalability, better latency management, and uncompromised link utilization.
One new approach, rooted in a deep knowledge of storage DNA, looks at the problem in a radically different way compared to what we’ve seen before. It addresses the problem by keeping in mind modern types of data (compressed and encrypted) and focusing on mitigating latency issues while maximizing efficiency and predictability at scale. The result is overall TCO improvement, better latency management, and an outstanding utilization rate of high-bandwidth links.
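Why latency, rather than raw bandwidth, caps utilization can be seen from the bandwidth-delay product: a windowed protocol cannot push more than one window of data per round trip, no matter how fast the link. A back-of-the-envelope calculation, with illustrative numbers (a classic 64 KB TCP window and a 50 ms WAN round trip):

```python
# A windowed protocol's throughput is bounded by window size / round-trip time,
# independent of link capacity.
window_bytes = 64 * 1024   # 64 KB window (no window scaling), illustrative
rtt_seconds = 0.05         # 50 ms round trip, e.g. a cross-country WAN link

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(max_throughput_bps / 1e6)  # ~10.5 Mbit/s on a link of any capacity
```

At roughly 10.5 Mbit/s per flow, a 10 Gbps link would sit almost entirely idle, which is why mitigating latency, not adding compression, is where next-generation designs concentrate.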
Key highlights from this report include:
- Traditional WAN optimization is not sufficient for dealing with high-performance WAN connectivity and modern types of data. As the amount of stored data grows, widely used compression and encryption techniques offset traditional WAN optimization efficiencies.
- Efficiency — both in utilization rate and latency mitigation — is now key. Next-generation WAN optimization is a new concept designed with scalability, efficiency, and TCO in mind.
- Next-gen WAN optimization provides benefits similar to traditional WAN optimization but with improved TCO, freedom to scale, and the ability to prepare the infrastructure for new needs (such as cloud, object storage, and mobile).
- Primary use cases for WAN optimization
- Business benefits of WAN optimization
- Traditional WAN optimization: It’s all about perception
- New challenges need a paradigm shift
- Looking at next-generation WAN optimization
- Implementing next-generation WAN optimization
- Key takeaways
- About Enrico Signoretti
- About Gigaom Research