Avoiding Latency in the Cloud

The cloud promises to change the way businesses, governments and consumers access, use and move data. For many organizations, a big selling point of cloud infrastructure services is migrating massive data sets to relieve internal storage requirements, leverage vast computing power, reduce or contain their data center footprint, and free up IT resources for strategic business initiatives. As we move critical and non-critical data to the cloud, reliable, secure and fast access to that information is crucial. But given bandwidth and distance constraints, how do we move and manage that data to and from the cloud, and between different cloud services, in a cost-efficient, scalable manner?

Providers and consumers of cloud services should acknowledge that large distances between data and the applications that use it introduce latency not typically found within local area networks. Cloud infrastructure services provide rapidly scalable architectures that can support internal applications without taxing or waiting for internal enterprise resources. But the promise of significant productivity gains is weakened when moving massive amounts of data (hundreds of gigabytes or terabytes) into the cloud becomes a labor-intensive, time-consuming task. And if accessing the data is slow and cumbersome for the end user, it becomes a losing value proposition for the cloud provider, the company and its end-user base.
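
A quick back-of-envelope calculation illustrates the scale of the problem. The data-set sizes and link speeds below are assumptions chosen for illustration, and they generously assume the link runs at its full rated speed, which long-distance transfers rarely achieve:

```python
# Back-of-envelope transfer times: how long a large data set takes to
# move at a given effective throughput. Sizes and rates are illustrative.

def transfer_time_hours(size_gb: float, throughput_mbps: float) -> float:
    """Hours to move size_gb gigabytes at throughput_mbps megabits/s."""
    size_megabits = size_gb * 8 * 1000  # 1 GB = 8,000 megabits (decimal units)
    return size_megabits / throughput_mbps / 3600

for size_gb in (100, 1000):            # 100 GB and 1 TB data sets
    for mbps in (10, 45, 100):         # assumed WAN rates: 10 Mbps, T3, 100 Mbps
        print(f"{size_gb:>5} GB at {mbps:>3} Mbps: "
              f"{transfer_time_hours(size_gb, mbps):6.1f} hours")
```

Even under these best-case assumptions, a terabyte over a 45 Mbps T3 line takes roughly two days.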

Traditional transfer methods have never worked well with large volumes of data, especially over long distances. In fact, they perform so poorly that some leading cloud providers have recently begun inviting customers to ship physical hard drives rather than transfer files over the network. The performance limitations of ubiquitous transfer applications such as FTP or HTTP are a direct result of the inherent bottlenecks of TCP, the traditional protocol used to reliably transfer data over IP networks. Packet loss occurs on any network at varying rates, and TCP, which FTP rides on, handles it crudely: the lost data is retransmitted and the sender throttles to a self-imposed slower speed. What’s more, because recovery is paced by round-trip time, FTP transfer times degrade further over long distances. Ask any IT executive or digital media manager who has used FTP to move large data sets over distance, and they’ll tell you how painfully slow and unreliable it can be.
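
The distance penalty can be made concrete with the well-known Mathis model, which bounds a single TCP stream’s steady-state throughput at roughly MSS/RTT × 1.22/√p, where MSS is the segment size, RTT the round-trip time and p the packet-loss rate. A minimal sketch, plugging in representative (assumed) round-trip times and loss rates:

```python
import math

# Steady-state throughput ceiling for one TCP stream under the Mathis
# model (Mathis et al., 1997): rate <= (MSS / RTT) * (C / sqrt(p)),
# with C = sqrt(3/2) ~ 1.22 for standard Reno-style TCP.

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss_rate: float) -> float:
    """Upper bound on a single TCP stream's throughput, in Mbps."""
    C = math.sqrt(3.0 / 2.0)  # ~1.22
    bytes_per_sec = (mss_bytes / (rtt_ms / 1000.0)) * (C / math.sqrt(loss_rate))
    return bytes_per_sec * 8 / 1e6

mss = 1460  # typical Ethernet MSS in bytes
for rtt_ms in (5, 50, 150):           # LAN, cross-country, intercontinental
    for loss in (0.0001, 0.01):       # 0.01% and 1% packet loss (assumed)
        print(f"RTT {rtt_ms:>3} ms, loss {loss:.2%}: "
              f"{tcp_throughput_mbps(mss, rtt_ms, loss):8.2f} Mbps ceiling")
```

At an intercontinental 150 ms round trip with 1 percent loss, a single stream tops out near 1 Mbps no matter how much raw bandwidth the link has, which is exactly why shipping hard drives can win.
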
Fortunately, data transfer technology has advanced by leaps and bounds, and modern solutions can be integrated easily into cloud services. When adopting cloud services, optimizing wide area network bandwidth use should be at the top of anyone’s checklist. Cloud providers are uniquely poised to significantly increase the rate of adoption of their services by offering alternatives to CIFS, NFS, FTP and HTTP that truly optimize bandwidth use, such as the parallel-stream approach sketched below.
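
One general technique in this direction, short of replacing TCP outright, is striping a transfer across several parallel streams so that no single stream’s ceiling caps the whole job; tools such as GridFTP and many download managers work this way. The sketch below is a minimal illustration using HTTP Range requests, with a hypothetical URL and the assumption that the server honors Range; it represents the general technique, not any particular vendor’s product:

```python
import concurrent.futures
import urllib.request

URL = "https://example.com/large-dataset.bin"  # hypothetical object
PARTS = 8                                      # number of parallel streams

def fetch_range(start: int, end: int) -> bytes:
    # Fetch one byte range of the object over its own TCP connection.
    req = urllib.request.Request(URL, headers={"Range": f"bytes={start}-{end}"})
    with urllib.request.urlopen(req) as resp:
        return resp.read()

def parallel_download() -> bytes:
    # Learn the object size, then split it into PARTS contiguous ranges.
    size = int(urllib.request.urlopen(
        urllib.request.Request(URL, method="HEAD")).headers["Content-Length"])
    chunk = size // PARTS
    ranges = [(i * chunk, size - 1 if i == PARTS - 1 else (i + 1) * chunk - 1)
              for i in range(PARTS)]
    # Fetch all ranges concurrently; map() preserves order, so the
    # joined result reassembles the object correctly.
    with concurrent.futures.ThreadPoolExecutor(max_workers=PARTS) as pool:
        return b"".join(pool.map(lambda r: fetch_range(*r), ranges))
```

Parallel streams only mask TCP’s per-stream limits rather than remove them, but they show how much headroom traditional protocols leave on the table over wide area links.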

The cloud holds the power to change the way businesses interact and manage their data. Before adopting cloud services, customers would be wise to evaluate the time, and therefore money, lost with slow and inefficient data transfer and how that might affect their infrastructure objectives. Latency induced by traditional transfer technologies is more than an aggravating side effect; it is fundamentally detrimental to business and productivity. With the advent of next-generation digital transfer technology, organizations can now fully realize the scalability potential and cost savings offered by cloud services.

Michelle Munson is president and co-founder of Aspera.
