
Over at Earth2Tech, I wrote about an effort to use millions of specialized embedded processors to build a (relatively) energy-efficient supercomputer that could run at speeds of up to 200 petaflops. The Department of Energy’s Lawrence Berkeley National Laboratory has signed a partnership with chip maker Tensilica to research building such a computer, but after chatting with Chris Rowen, Tensilica’s CEO, I wonder if more specialized computing tasks in the data center might be farmed out to highly customizable but lower-powered chips.

Rowen doesn’t think the data center is yet at the point where power-consumption costs outweigh the benefits of using a cheaper x86 processor, but he said that day might come, especially for very specific uses such as accessing web databases. In the meantime, he’s focused on getting customized embedded cores into applications that depend on speed, such as routing. Cisco uses Tensilica cores in its recently launched QuantumFlow Processor, primarily as a way to boost speeds. As the web gets faster, general-purpose x86 chips have to work harder and run hotter, so a return to specialized, low-power processors may be in the cards.
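Since the whole argument turns on performance per watt, a quick back-of-envelope sketch may help. Every throughput and wattage figure below is a hypothetical placeholder chosen for illustration, not a measured number for any real Tensilica or x86 chip; the point is the shape of the math, not the specific values.

```python
# Back-of-envelope performance-per-watt comparison. All figures are
# hypothetical placeholders, not measured numbers for real chips.

X86_GFLOPS = 50.0    # hypothetical peak throughput of one x86 server chip
X86_WATTS = 120.0    # hypothetical power draw of that chip

CORE_GFLOPS = 10.0   # hypothetical throughput of one embedded core
CORE_WATTS = 0.5     # hypothetical power draw of one embedded core

def gflops_per_watt(gflops: float, watts: float) -> float:
    """Efficiency metric: GFLOPS delivered per watt consumed."""
    return gflops / watts

print(f"x86 chip:      {gflops_per_watt(X86_GFLOPS, X86_WATTS):.2f} GFLOPS/W")
print(f"embedded core: {gflops_per_watt(CORE_GFLOPS, CORE_WATTS):.2f} GFLOPS/W")

# The embedded core delivers far less raw throughput but far more
# throughput per watt, which is why a machine built from millions of
# them can target petaflop speeds inside a practical power budget.
TARGET_GFLOPS = 200e6  # 200 petaflops, expressed in GFLOPS
cores = TARGET_GFLOPS / CORE_GFLOPS
megawatts = cores * CORE_WATTS / 1e6
print(f"200 PFLOPS needs ~{cores:,.0f} such cores drawing ~{megawatts:.0f} MW")
```

Even with generous assumptions for the x86 part, it’s the efficiency gap, not raw speed, that makes the millions-of-cores approach plausible.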

Computing hardware and services tend to run in cycles, and right now I think the hardware and networks put in place in the late ’90s, which allowed Web 2.0 and rich Internet applications to flourish, are hitting their limits. The IP and IT networks are in the early stages of stepping up to the challenge of delivering the next generation of services, but unlike in the last cycle, power consumption will join speed as an essential feature of the underlying silicon.
