
Texas Instruments is looking to hop on the trend of using non-x86 processors in the data center, according to Kathy Brown, general manager of the company’s wireless base station infrastructure business. Last night over dinner, Brown said the wireless chip powerhouse was trying to build a software framework that would enable researchers to run Linux on its high-end digital signal processors (DSPs) used for scientific computing.

The idea of using DSPs is not new. Tensilica, a DSP core company, is working with researchers at the Department of Energy’s Lawrence Berkeley National Lab to build a supercomputer made up of millions of its configurable cores. The chief advantage of using DSPs is that they are very power-efficient. So as “performance per watt” becomes the hot term in both the high-performance computing world and the data center, chip companies are seeing an opportunity.

So are companies that operate their own data centers. Microsoft, for example, is researching the power savings associated with running some of its jobs on Intel’s low-power Atom processor.
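To make the “performance per watt” arithmetic concrete, here is a minimal sketch; the throughput and power figures in it are hypothetical, chosen only to illustrate the tradeoff, and are not measured numbers for any TI, Tensilica, Intel or Microsoft part.

#include <stdio.h>

int main(void) {
    /* Hypothetical figures, for illustration only. */
    double dsp_gflops = 10.0, dsp_watts = 5.0;    /* slower, but frugal   */
    double cpu_gflops = 50.0, cpu_watts = 100.0;  /* faster, but hungrier */

    /* Performance per watt is simply sustained throughput / power draw. */
    printf("DSP: %.2f GFLOPS per watt\n", dsp_gflops / dsp_watts);  /* 2.00 */
    printf("CPU: %.2f GFLOPS per watt\n", cpu_gflops / cpu_watts);  /* 0.50 */
    return 0;
}

On these made-up numbers, the DSP comes out four times as energy-efficient even though the CPU finishes any single job five times faster; that is the tradeoff vendors are betting data center operators will accept.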

With no high-end server chip business to protect, as Intel has, other chip companies are trying to muscle in with low-power options. Texas Instruments and Tensilica are using DSPs, while HPC company SiCortex told me last week that it may broaden its market beyond supercomputing in the next year with its specially designed ASIC. But to take advantage of such specially designed chips, software must be adapted or new programs written, something scientists are comfortable doing but general IT specialists may not have time for.

If TI truly wants to gain traction in this space, it may have to take a page from Nvidia’s book. Nvidia pushed its graphics processors into scientific computing with a software tool called CUDA, which helped people adapt programs written for x86 machines to run on GPUs. The effort paid off: fiscal third-quarter sales in its scientific computing division grew 31 percent over the same period in 2008, even as its desktop and notebook sales fell 33 percent. Those efforts can also reduce power consumption, but they are aimed primarily at adding speed.
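For a sense of what that adaptation involves, here is a minimal sketch of the CUDA programming model; the function names and sizes are illustrative, not drawn from any real application. The serial loop an x86 machine would run becomes a kernel spread across thousands of GPU threads, and the programmer has to stage data into the card’s memory explicitly.

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

// CPU version, shown for contrast: the loop as written for an x86 machine.
void scale_cpu(const float *in, float *out, float factor, int n) {
    for (int i = 0; i < n; i++)
        out[i] = in[i] * factor;
}

// GPU version: the loop body becomes a kernel; each thread handles one element.
__global__ void scale_gpu(const float *in, float *out, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * factor;
}

int main(void) {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_in  = (float *)malloc(bytes);
    float *h_out = (float *)malloc(bytes);
    for (int i = 0; i < n; i++) h_in[i] = (float)i;

    // Unlike the x86 version, data must be copied into the GPU's own
    // memory and back again -- part of the adaptation work CUDA imposes.
    float *d_in, *d_out;
    cudaMalloc(&d_in, bytes);
    cudaMalloc(&d_out, bytes);
    cudaMemcpy(d_in, h_in, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    scale_gpu<<<blocks, threads>>>(d_in, d_out, 2.0f, n);
    cudaMemcpy(h_out, d_out, bytes, cudaMemcpyDeviceToHost);

    printf("out[42] = %.1f\n", h_out[42]);  // prints 84.0

    cudaFree(d_in);  cudaFree(d_out);
    free(h_in);      free(h_out);
    return 0;
}

The point of a tool like CUDA, and presumably of whatever framework TI builds for its DSPs, is to keep that rewrite small enough that the power or speed gains are worth the porting effort.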

Regardless, when it comes to scientific computing, and perhaps web-scale computing, scientists and data center operators seem willing to adapt to a different processor architecture if the job is big enough to merit the effort on the software side. So heterogeneous computing may become more mainstream.

Comments

  1. Forget it. TI is way too late and its chips badly lag. There is a lot of noise from all sorts of players, but at the end of the day Intel will be the winner. x86 everywhere will win.

  2. Mark Thomas Friday, March 6, 2009

    David, either you work for Intel or you’re wearing blinders. x86 is a wasteful architecture and the needless energy consumption is problematic for large data centres. A better answer is needed, and it may not come from Intel.

  3. [...] it’s pushing into data centers and high-performance computing. Other chip vendors such as Texas Instruments or Tensilica, which are pushing DSPs for specialty computing, and Nvidia, which is pushing GPUs [...]

  4. [...] in several industries. Today’s announcement also underscores an emerging trend of using a broader range of processors in HPC and even data centers as the tradeoff between performance and energy becomes a bigger [...]

  5. [...] the concern about power efficiency in data centers, server vendors and data center operators are exploring unconventional processors and seem willing to accept lower speeds. Microsoft is testing the use of Intel’s low-power [...]

  6. [...] processors for some types of applications, while Texas Instruments is researching the use of DSPs inside servers. So ARM’s server ambitions aren’t so far-fetched, and because of its [...]


Comments have been disabled for this post