
The cloud promises to change the way businesses, governments and consumers access, use and move data. For many organizations, a big selling point in cloud infrastructure services is migrating massive data sets to relieve internal storage requirements, leverage vast computing power, reduce or contain their data center footprint, and free up IT resources for strategic business initiatives. As we move critical and non-critical data to the cloud, reliable, secure and fast access to that information is crucial. But given bandwidth and distance constraints, how do we move and manage that data to and from the cloud, and between different cloud services, in a cost-efficient, scalable manner?

Providers and consumers of cloud services should acknowledge that large distances between data and applications introduce latency that is not typically found within local area networks. Cloud infrastructure services provide rapidly scalable architectures that can support internal applications without taxing or waiting for internal enterprise resources. But the promise of significant productivity gains is weakened when moving massive amounts of data (hundreds of gigabytes or terabytes) into the cloud becomes a labor-intensive, time-consuming task. And if accessing the data is slow and cumbersome for the end user, it becomes a losing value proposition for the cloud provider, the company and its end-user base.
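A back-of-the-envelope calculation makes the scale of the problem concrete (the figures below are illustrative, not taken from the article): even at a sustained 50 Mbit/s end to end, a 10 TB migration takes weeks, not hours.

```python
def transfer_time_hours(data_tb: float, effective_mbit_s: float) -> float:
    """Hours needed to move data_tb terabytes at a sustained
    end-to-end rate of effective_mbit_s megabits per second."""
    bits = data_tb * 1e12 * 8              # terabytes -> bits
    seconds = bits / (effective_mbit_s * 1e6)
    return seconds / 3600

# Illustrative example: a 10 TB data set over a link sustaining 50 Mbit/s
print(f"{transfer_time_hours(10, 50):.0f} hours")  # ~444 hours, over 18 days
```

The sustained end-to-end rate is the key variable here: as the next paragraph explains, loss and latency usually keep it far below the nominal link capacity.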

Traditional transfer methods have never worked well with large volumes of data, especially over long distances. In fact, they perform so poorly that some leading cloud providers have recently begun offering to let customers ship physical hard drives rather than transfer files over the network. The performance limitations of ubiquitous transfer applications such as FTP or HTTP are a direct result of the inherent bottlenecks of TCP, the traditional protocol used to reliably transfer data over IP networks. On any type of network, packet loss occurs at varying rates, and TCP deals with it in an unsophisticated manner: an entire window of data is resent, and the transfer resumes at a self-imposed slower speed. What’s more, these slowdowns are exacerbated over long distances, where higher round-trip times make recovery even slower. Ask any IT executive or digital media manager who has used FTP to move large data sets over any distance, and they’ll tell you how painfully slow and unreliable it can be.
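The interaction of loss and round-trip time described above can be quantified with the well-known Mathis approximation, which bounds steady-state TCP throughput regardless of link capacity. A minimal sketch (the parameter values are illustrative assumptions, not measurements from the article):

```python
import math

def tcp_max_throughput(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis approximation of steady-state TCP throughput in bits/s:
    rate <= (MSS / RTT) * (C / sqrt(p)), with C ~ 1.22 for standard Reno."""
    C = 1.22
    return (mss_bytes * 8 / rtt_s) * (C / math.sqrt(loss_rate))

# Cross-country link: 1460-byte segments, 100 ms RTT, 1% packet loss
rate = tcp_max_throughput(1460, 0.100, 0.01)
print(f"{rate / 1e6:.1f} Mbit/s")  # ~1.4 Mbit/s, no matter how fat the pipe
```

Because throughput falls with both round-trip time and the square root of the loss rate, a gigabit link across a continent can deliver only a few megabits per second to a single TCP flow, which is exactly why distance matters so much for bulk transfers.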
Fortunately, great strides have been made in combating data transfer issues, and such solutions can be easily integrated into cloud services. When adopting cloud services, optimizing wide area network bandwidth use should be at the top of anyone’s checklist. Cloud providers are uniquely poised to significantly increase the rate of adoption of their services by offering alternatives to CIFS, NFS, FTP and HTTP that truly optimize bandwidth use.

The cloud holds the power to change the way businesses interact and manage their data. Before adopting cloud services, customers would be wise to evaluate the time, and therefore money, lost with slow and inefficient data transfer and how that might affect their infrastructure objectives. Latency induced by traditional transfer technologies is more than an aggravating side effect; it is fundamentally detrimental to business and productivity. With the advent of next-generation digital transfer technology, organizations can now fully realize the scalability potential and cost savings offered by cloud services.

Michelle Munson is president and co-founder of Aspera.

  1. What the hell is this crap? Was this written in 1996 and freshened up for today? I’m sorry, but this is the most obvious and vulgar jargon-filled snakeoil value proposition piece I’ve ever read. This is the piece you show to executives who don’t know the first thing about technology when you want to sucker them into buying some obscenely expensive package. For a professional who is immersed in technology and has seen all of the hucksters come and go, this rates a C-.

  2. I find this article quite interesting and informative. I am not sure what Mr. “Barack Obama” (really!) is talking about. Transfer speed is a big issue for any cloud site and needs to be resolved. The FTP spec is much older than the internet and needs to be replaced by something more efficient.

  3. “Barack Obama” couldn’t be more wrong. There’s a reason that the cloud provider was offering “shipped physical media” as a way to get data into the cloud… traditional file transfer just doesn’t cut it in the first place (FTP and HTTP, as TCP-based protocols, being the worst), and this problem is exacerbated by poorly performing clouds.

    There are a few successful companies providing file transfer alternatives to technologies like FTP, and the cloud providers need to begin devising a way to integrate those replacements into the core of their offerings. “Cloud storage” providers expecting someone to upload terabytes of data using methods like FTP is the part that’s dating back to 1996.

    Whether on the cloud or not, enterprises need to look into the alternatives… yes, they cost money (sophisticated technology isn’t an accident), but they’re increasingly necessary with the volumes of data people handle these days.

  4. Barack Obama Sunday, June 21, 2009

    These are fantastic shill comments, but they can’t distract from the fact that if your organization is stupid enough to rely on FTP and naive enough to try to bandaid it with file storage, then your organization’s problem isn’t how to handle data, it’s how to handle incompetence.

    Let me put it this way: if you showed this article to an engineer at a company who takes data transfer and bandwidth seriously (say Google), you’d get laughed out of the room for even presenting this laughably bad “solution.”

    Here’s why this article is pure snakeoil: it doesn’t provide you with any useful information about *how* to approach the problem. It simply states that its “solution” will solve your problems and make your life better. Like I said before, it’s the value proposition. Not only is this approach willfully dishonest, it’s completely unethical. If you want to make good decisions, get the facts, and judge their value on your own. Don’t rely on some salesperson hiding in writer’s clothes to make the value judgement for you. That’s just plain stupidity.

    The fact is that cloud storage provides absolutely no real cost savings anywhere. Why? Because the minuscule savings you achieve in bandwidth performance is fizzled away by the cloud service costs, not to mention the cost increases in IT deployment and maintenance procedures.

    Never mind that the actual usability of cloud storage is very poorly thought through, no matter who you use for a provider.

    Have you ever personally used a cloud solution to transfer data? What’s the first headache you run into? Oh look, you have to use some lame web interface, or some poorly written Explorer add-in, or do a file-drop to some strange network location. All you’re doing is asking your users to trade-in the familiarity of FTP for a new complicated procedure that saves them absolutely nothing. If it’s a web interface, you still have to hit the browse button (there’s no drag & drop). If it’s an explorer add-in, it typically doesn’t replicate all of the explorer functionality users need, and it almost never plays nicely with other Explorer plug-ins. If it’s a drop location, prepare for endless help desk calls as users ask, “how do I know it went through? what’s happening? I don’t understand this?”

    Also, in reference to 1996, I’m laughing at the fact that this article references latency as some sort of salient point. Seriously? You’re attacking the problem by arguing latency? That’s like me attacking the age of the Earth by citing flaws in carbon dating. It’s a straw man argument that you should be ashamed of.

    This sales pitch for cloud storage is a joke. Anyone who tries to sell you on the cloud by only telling you how great your life will be after is a thief. Anyone who falls for it is incompetent.

    1. D00d, for all the ferocity of your opinion, you aren’t even secure enough to disclose who you are and what you stand for? I am sure many folks would love to have a fact-based discussion on this topic.

      What Aspera is doing is to present a point of view, which you have full right to refute. All they are saying is their President will talk about this position at an upcoming conference.

      At any rate, would you agree that data is being produced many times faster than network speeds are growing? And as the data being generated increases, it needs to be analyzed, which means it needs to be stored so that interesting queries can be run on it.

      In expressing your disapproval, you have gone from the economics of cloud storage, to the economics of the cloud itself, then to UI (??)… could it be possible that this pain point is broad enough that Aspera may be addressing it and solving at least one aspect of cloud storage economics: getting the data in a fast and reliable manner?

      1. Mr President, his mode of expression notwithstanding, is right about the folly of substituting one cost for another while adding complexity at the user level. That is a doomed proposition.

        At some level, everybody is right on the issue but there is a natural tendency to stray from the fundamentals. Will anybody argue that network latency is not a challenge? Will anybody argue that packet loss is not a problem? Our experience (and yours, too, whether you make the connection or not) is that the negative effects of packet loss are actually growing as real-time, cloud, and virtualized applications generate increasing demand for good network performance and quality. And our experience also shows that it is entirely possible to all but eliminate the packet loss that is endemic to best efforts networks while adding no latency of any consequence. This will prove a boon to some cloud apps.

        This much seems obvious: We all want to be using best efforts networks before paying for the ‘fat pipes’ of dedicated bandwidth. In fact, we all need to be going over-the-top of the public Internet for many of our networked business applications. At IPeak Networks, we believe Aspera has the right idea and we applaud every effort to make the cloud make better business sense. IPeak Networks happens to be oriented on making the best use of the lowest cost and most readily available network services but there is certainly room for solutions at both ends of the economic spectrum.

