
Summary:

The demand for fast, bare-metal servers will fade over time, especially as new powerful cloud resources come online, said Rackspace’s Scott Sanchez.


Rackspace is betting that many businesses that now use its bare-metal servers will move up to its new, more muscular Performance Cloud Servers, which make their debut Tuesday morning at the OpenStack Summit.

“Customers want performance, whether it’s in dedicated servers or in the cloud, and the answer to date is that they either write some fancy app code and put it on our cloud to scale horizontally or they manually install it on our [dedicated] bare-metal servers,” Scott Sanchez, Rackspace’s director of strategy, said in an interview. “They want performance but they want it on demand, and they were unable to find that on our — or any cloud — until now.”

At last spring’s OpenStack Summit, there was a lot of talk about demand for bare-metal servers versus highly virtualized, shared infrastructure for big data and some other applications. The performance of bare-metal servers — minus virtualization — is much higher for those types of workloads. And of course, some highly regulated applications will continue to demand dedicated, non-shared resources.

But Rackspace is banking on the notion that there will be fewer of those applications over time. The new Performance Cloud Servers are built on Intel Xeon E5 CPUs (up to 32 virtual CPUs per machine), with up to 120GB of RAM, four 10-gigabit Ethernet connections to two top-of-rack switches, and all-SSD storage.

Sanchez said the reasons customers use dedicated bare-metal servers are fading over time, and that offerings like these new servers provide plenty of headroom for applications such as large-scale NoSQL database work.


  1. What’s interesting is that these “high performance” servers are actually going to replace the old architecture – all new servers will be on the new platform with higher-throughput networking and, in particular, SSD storage. Given the recent competition from DigitalOcean, this isn’t surprising, and it really puts Amazon’s low-spec instances to shame.

    What’s really worth noting is the significant I/O performance you’ll get from this, as well as proper network throughput (which increases as you go up the instance types). But there still remains the uncertainty of other users on the host machine, which is why the public cloud is always going to be risky for database use cases.

    1. David,

      It is a very valid concern. I work for Rackspace. We tried to minimize the uncertainty of performance degrading due to other guests on the same machine in a number of ways:
      – We have lowered the oversubscription of vCPUs, and in Performance 2 servers there is no oversubscription, which means your CPU power is effectively guaranteed
      – On the network, every host machine receives 40Gbps of network capacity; it is quite unlikely that multiple guests on the same machine will try to use the network at max capacity at the same time
      – If you get one of the largest servers, the machine is only for you, so there is no noisy neighbor. I know, it is expensive and not feasible for most customers.
      – Even with high-performance networks, SSD drives and dedicated CPUs, there are other subsystems in a server that could theoretically become a bottleneck. So we ran some tests: we loaded an entire group of servers to use all resources at max capacity, then ran benchmarks on one of the guests to see how performance would degrade in this unlikely scenario where all other guests on the same server and group were maxing out their allocated resources. The results we saw are very encouraging, and we hope to publish them on our blog soon.

      In the meantime, you can find some benchmarks here http://developer.rackspace.com/blog/welcome-to-performance-cloud-servers-have-some-benchmarks.html
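      Gerardo describes loading a whole group of servers and then benchmarking one guest. A minimal sketch of one ingredient such a test might include, assuming nothing about Rackspace's actual harness (the function name and sizes here are illustrative, not theirs), is a sequential-write throughput probe:

```python
import os
import tempfile
import time


def write_throughput_mb_s(total_mb=64, block_kb=1024):
    """Write total_mb of random data in block_kb chunks to a temp file,
    fsync it, and return the observed throughput in MB/s."""
    block = os.urandom(block_kb * 1024)
    blocks = (total_mb * 1024) // block_kb
    with tempfile.NamedTemporaryFile() as f:
        start = time.perf_counter()
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # include the flush-to-disk cost in the timing
        elapsed = time.perf_counter() - start
    return total_mb / elapsed


if __name__ == "__main__":
    print(f"sequential write: {write_throughput_mb_s():.1f} MB/s")
```

      Running the same probe on an idle guest and again while neighbors are loaded gives a rough measure of the degradation being discussed.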

      1. Gerardo – thanks for the insight into the workings of RAX, which in turn raises a number of points:
        1. Over-subscription is how Cloud providers make their profits (ask the Telcos) – no oversubscription means falling profits – not something a preferred Cloud supplier should be stating.
        2. 40Gbps divided by the number of hosted instances/guests (even without over-subscription) could well be 1Gbps (or less) per instance.
        3. Every “system” has a bottleneck – always.

        Looking forward to reading about RAX innards in more detail ….. thanks.
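        Steve's second point is simple division. As a worked example (the guest count here is hypothetical, since the thread does not say how many guests share a machine), the fair-share bandwidth per instance is:

```python
def per_instance_gbps(link_gbps=40.0, guests=40, oversub=1.0):
    """Naive fair-share estimate: the host's link capacity split evenly
    across guests, further divided by any oversubscription factor."""
    return link_gbps / (guests * oversub)

# 40 Gbps shared by 40 guests with no oversubscription: 1 Gbps each
# the same link oversubscribed 2:1: 0.5 Gbps each
```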

        1. Steve,

          Thanks for your comment.

          Yes, profits are lower when you don’t oversubscribe. It is like when airlines oversell seats knowing some people won’t show up. Yet we don’t use oversubscription in Performance 2 Cloud Servers, in order to deliver consistent performance. We think customers will appreciate it, so it should be good for customers and for profits.

          I agree, every system has a bottleneck, and when you solve one, you have to work on the next. You may find actual benchmarks more valuable. Here is a customer who is consistently seeing 2,500 transactions per second in SQL Server with one server: http://www.rackspace.com/blog/new-cloud-servers-drive-more-than-6x-performance-boost/


    2. Exactly, big powerful connected machines, but still shared machines.

  2. It comes down to reliability, really.

  3. Translation: we still oversubscribe and we still pretend that it is possible to put two tons of dirt into a one ton truck as long as the truck is painted pink!

    1. My apologies if I was not clear: there is NO oversubscription in Performance 2 Cloud Servers. You get dedicated CPU resources.
