
Summary:

Cell phone chips just became more appropriate for server workloads, as ARM announced a 64-bit version of its low-energy processor architecture. And the first company to take advantage of the new design looks to be AppliedMicro, which will build servers for webscale environments.


Cell phone chips just became more appropriate for server workloads, as ARM said Thursday it would offer a 64-bit version of its low-energy processor. The first company to take advantage of the new design looks to be AppliedMicro, which said it will expand its embedded systems business by making servers aimed at the cloud and webscale companies.

ARM is knocking on the data center door.

By now, many readers are familiar with the challenges faced by webscale data center operators, who are running tens of thousands of servers and are concerned about rising energy costs. The ability to eke out the highest performance for each watt of power has become a crucial metric. In some cases, they may not even need the gigahertz monsters that Intel has offered with its Nehalem chips, because their processing workloads are smaller.
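
To see why performance per watt matters so much at this scale, a back-of-the-envelope sketch helps. The numbers below are purely illustrative assumptions, not measurements of any chip or data center mentioned here.

```python
# Back-of-the-envelope comparison of two hypothetical server chips.
# All figures are invented for illustration; none are vendor numbers.

HOURS_PER_YEAR = 24 * 365
COST_PER_KWH = 0.10           # assumed electricity price, dollars per kWh

def fleet_energy_cost(num_servers, watts_per_server):
    """Annual electricity bill for a fleet, ignoring cooling overhead."""
    kwh = num_servers * watts_per_server * HOURS_PER_YEAR / 1000
    return kwh * COST_PER_KWH

# Chip A: fast but power-hungry. Chip B: slower but frugal.
perf_a, watts_a = 100, 130    # arbitrary performance units and watts
perf_b, watts_b = 60, 45

print("perf/watt, chip A:", round(perf_a / watts_a, 2))   # ~0.77
print("perf/watt, chip B:", round(perf_b / watts_b, 2))   # ~1.33

# If a workload only needs chip B's throughput per node, the gap across
# tens of thousands of servers shows up directly on the power bill.
print("annual power, 20,000 servers of A: $%.0f" % fleet_energy_cost(20000, watts_a))
print("annual power, 20,000 servers of B: $%.0f" % fleet_energy_cost(20000, watts_b))
```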

This shift in computing has led to a plethora of startups such as Tilera, a massively multicore chip company; Calxeda, a startup that will reportedly soon announce a newer version of its ARM chips used in servers from HP; and SeaMicro, a company building low-power servers using Intel’s Atom chips. Other industry players such as Marvell, Nvidia and Via are also getting into the server market.

In an interview, Jim Johnston, senior director of marketing at AppliedMicro, explained that the company wants to use these 64-bit ARM processors to deliver up to 3 gigahertz per core in systems that can use between two and 128 cores. At the same time, the goal is to deliver that processing power without sucking down watts of energy. The ARM architecture, plus a special chip that governs the operations of the multicore chips, is its answer to managing power and security, and perhaps to enabling a host of other uses for the cores.

AppliedMicro wants to join the webscale party.

Johnston explained that the value of the AppliedMicro architecture will be that it can offer single-threaded, multi-gigahertz performance using the ARM architecture. Companies like Marvell or Calxeda appear to be using multiple cores with less processing power each to deliver higher aggregate performance. In this way, the AppliedMicro version of the ARM chip looks more like a rival to the lower-power Sandy Bridge chips Intel announced this year.
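
The tradeoff Johnston is describing (a few fast cores versus many slower ones) is easiest to see with Amdahl's law. The sketch below uses an invented workload, assuming 10 percent of each request is inherently single-threaded, to show why per-core speed still sets a ceiling on what extra cores can deliver.

```python
# Rough Amdahl's-law illustration of fast-few-cores vs. many-slow-cores.
# The serial fraction is an assumption, not data about any real workload.

def amdahl_speedup(serial_fraction, cores):
    """Speedup over a single core when part of the work cannot be parallelized."""
    return 1 / (serial_fraction + (1 - serial_fraction) / cores)

serial = 0.10   # assume 10% of a request is inherently single-threaded

for cores in (2, 8, 32, 128):
    print(cores, "cores ->", round(amdahl_speedup(serial, cores), 1), "x")

# Prints roughly 1.8x, 4.7x, 7.8x, 9.3x: once a serial component exists,
# adding cores flattens out, and single-threaded (per-core) speed becomes
# the limiting factor.
```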

While chips using the newly announced ARM architecture won’t sample until mid-2012 and won’t reach full production until 2013, there are already companies eyeing the use of ARM-based servers. Efforts such as AppliedMicro’s, which got into the chip business after buying IBM’s PowerPC business in 2004, may help convince folks that their data centers don’t always have to be an x86 world. In the process, AppliedMicro might see its own business expand from the embedded world to servers and high-end telecommunications gear.

  1. Interesting. Would one explanation for the power savings be that RISC processors employ fewer transistors (and other logic devices) per operation than CISC processors? One obvious problem is all the logic needed to support the microcode in a CISC processor, whereas RISC processors have little or no microcode architecture to support. If this is true, then in the long run it would be insane to run large server farms on x86 or x64/AMD64 architectures, as you are wasting all your power and overhead costs on heating up the room.

    Not sure why IBM have not flogged this angle before?

    Excellent article – awesome insight!

    1. There is no name on the results slide to indicate who produced it. There are no ARM results on the SPEC.ORG page. The slide gives the impression that the ARM results are official and measured rather than marketing predictions for 2012 and beyond.

      There are SPARC RISC results on the SPECINT_RATE2006 but the Westmere/Sandybridge results are better.

      I would temper your glow for this article until they are able to make silicon and actually run SPECINT_RATE2006. By then, they will be compared against the Ivybridge or even Haswell results.

      Eric, save this link and return when they have silicon.

  2. On the part where they say there is a 3x gap, they are wrong about the E3-1220L numbers. The 1220L gets 60 in SpecIntRate2006 benchmark, not 40.

    1. It is worse than using wrong numbers for competitors. The slide itself violates the SPEC Fair Use rules. At a minimum, the slide should clearly identify that the ARM results are estimates. Until the slide is corrected with a fuller description, the data is simply not valid.
      http://www.spec.org/fairuse.html#Estimates

  3. Eric Kolotyluk Friday, October 28, 2011

    Good points everyone – critical data is missing

  4. AMCC are talking about mass production of server chips in 2013. ARM on the other hand are talking about prototype products in 2014 using as yet unnamed Cortex processors.

    I have not heard of AMCC before, but they must be a serious company if they were able to acquire all the IP for embedded PowerPC from IBM.

    Their architecture license for ARM appears to be recent; does anybody know about previous ARM products? They must have acquired a lot of ARM skills from somewhere to achieve the working FPGA emulation by the day of the ARM 64-bit announcement.

    All seems very mysterious, but exciting.

    Question for people who know a lot more than I do: what is the relationship between the working FPGA emulation and silicon tape-out?

    Also ARM sound quite pessimistic about how long it will take to get products to market. Can anyone explain why this should involve a lot more work than the move to 40 bit addressing?

    1. I think some articles were mentioning that 64-bit ARMs were 4-wide decode. That suggests an architectural change from 2013’s A15 chip, which has the 40-bit address extension.

