GigaOM Research projects that the cloud computing market will grow from $70.1 billion in 2012 to $158.8 billion in 2014.
This adoption brings a commensurate need for sustainable performance from cloud service providers. Growth in cloud services parallels growth in the number of internet users, who expect ever-faster response times and place ever-greater trust and reliance on the internet. However, this performance cannot be sustained if the service provider's (SP's) cost of delivering it rises in step.
Service providers need a way to deliver low latency, fast response times, and ever-increasing performance while minimizing network cost.
This research paper will examine the stress points in a cloud infrastructure and the available network options. Among the key points covered are:
- How networks have realized the performance gains available from the clustering and parallelism demanded by today's server farms
- The need for performance improvement in both bandwidth and IOPS
- How data can travel across a network of compute and storage resources with minimal latency, as close to CPU speed as possible
- Why storage acceleration is critical
- What is driving the rise of the converged data center
- Three high-performance network infrastructures that can satisfy the criteria of predictability and repeatability