TI Wants to Use DSPs for Low-power Computing

Texas Instruments is looking to hop on the trend of using non-x86 processors in the data center, according to Kathy Brown, general manager of the company’s wireless base station infrastructure business. Last night over dinner, Brown said the wireless chip powerhouse was trying to build a software framework that would enable researchers to run Linux on its high-end digital signal processors (DSPs) for scientific computing.

The idea of using DSPs is not new. Tensilica, a DSP core company, is working with researchers at the Department of Energy’s Lawrence Berkeley National Lab to build a supercomputer made up of millions of its configurable cores. The chief advantage of using DSPs is that they are very power-efficient. So as “performance per watt” becomes the hot term in both the high-performance computing world and the data center, chip companies are seeing an opportunity.

So are companies that operate their own data centers. For example, Microsoft is researching the power savings of running some of its jobs on Intel’s low-power Atom processor.

Without a high-end server chip business to protect like Intel does, other chip companies are trying to muscle in with low-power options. Texas Instruments and Tensilica are using DSPs, while HPC company SiCortex told me last week that it may broaden its market beyond supercomputing in the next year with its specially designed ASIC. But to take advantage of such specialized chips, software must be adapted or new programs written, something scientists are comfortable doing but for which general IT specialists may not have time.

If TI truly wants to gain traction in this space, it may have to take a page from Nvidia’s book. Nvidia pushed its graphics processors into scientific computing using a software tool called CUDA, which helped people adapt programs written for x86 machines to run on GPUs. Those efforts paid off: sales in Nvidia’s scientific computing division grew 31 percent in the fiscal third quarter over the same period in 2008, even as its desktop and notebook sales fell 33 percent. Note, though, that while such acceleration can also reduce power consumption, its primary aim is speed.
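To make that concrete, here is a minimal sketch of the kind of port CUDA enables. It is not Nvidia’s or TI’s code, and all the names are made up; it simply shows a serial C loop (SAXPY, the standard y = a*x + y example) rewritten as a GPU kernel that spreads the work across thousands of threads.

```c
#include <stdio.h>
#include <stdlib.h>
#include <cuda_runtime.h>

// Serial loop as it might appear in an existing x86 program.
void saxpy_cpu(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

// The same loop ported to CUDA: each GPU thread handles one element.
__global__ void saxpy_gpu(int n, float a, const float *x, float *y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main(void) {
    const int n = 1 << 20;                 // one million elements
    const size_t bytes = n * sizeof(float);

    // Host-side input data.
    float *hx = (float *)malloc(bytes);
    float *hy = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { hx[i] = 1.0f; hy[i] = 2.0f; }

    // Copy the data to the GPU, run the kernel, copy results back.
    float *dx, *dy;
    cudaMalloc((void **)&dx, bytes);
    cudaMalloc((void **)&dy, bytes);
    cudaMemcpy(dx, hx, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dy, hy, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    saxpy_gpu<<<blocks, threads>>>(n, 2.0f, dx, dy);

    cudaMemcpy(hy, dy, bytes, cudaMemcpyDeviceToHost);
    printf("y[0] = %f (expect 4.0)\n", hy[0]);

    cudaFree(dx); cudaFree(dy);
    free(hx); free(hy);
    return 0;
}
```

The point is that the loop body survives almost unchanged; the real porting work is the memory management and launch scaffolding around it. That is exactly the friction a tool like CUDA keeps small, and it is the bar any framework TI ships for its DSPs would have to clear.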

Regardless, when it comes to scientific computing, and perhaps web-scale computing, scientists and data center operators seem willing to adapt to a different processor architecture if the job is big enough to merit the effort on the software side. If that holds, heterogeneous computing may become more mainstream.
