Parallel processing isn’t just for supercomputers or GPUs anymore. Computer makers are throwing multiple cores at everything from servers to your printer. But the focus on horsepower misses a crucial problem with adding more processors: to really take advantage of them, you have to rewrite your code.

As anyone who’s ever hosted a demolition party well knows, you can only throw so many workers at a problem before people start to linger at the edges, swill your alcohol and generally stop helping. You need not just manpower but a good way to organize those workers, so that someone, say, preps a drop cloth before your walls get taken out, and others prep for cleanup while the plaster is flying.
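To see why the rewrite is necessary, consider a minimal sketch (in Python, with an invented workload) of turning a serial loop into work split across cores. The chunking scheme and the prime-counting task here are purely illustrative, not any vendor's actual approach:

```python
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in [lo, hi) -- a CPU-bound stand-in for real work."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

def count_primes_parallel(limit, workers=4):
    # Split the range into independent chunks, one per worker process,
    # so each core gets its own slice of the problem.
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)  # absorb any remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(count_primes_parallel(10_000))
```

The answer matches the serial loop, but only because the chunks are truly independent; the hard part of real codebases is untangling work that isn't.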
Silicon doesn’t tend toward drunken destruction, but if you’re putting the cores in place, it would be great to give them better instructions. Otherwise the promise of performance is just a promise, which is why Microsoft and Intel recently pledged $20 million to two universities trying to figure out an easy way to translate billions of lines of existing code into instructions that multicore chips can use.
Others are pushing Erlang as a potential solution to parallel programming, while those in the supercomputing industry are warning of a performance drop caused by applications not keeping up with the cores. Software startup VirtualLogix is trying to use virtualization software to govern how multicore chips run applications by making the programs think they’re running on one processor.
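Erlang’s appeal for multicore work comes from its share-nothing, message-passing model: workers never touch each other’s memory, so coordination headaches shrink. A rough Python analogue of that style, using queues between processes (the worker task is invented for illustration):

```python
from multiprocessing import Process, Queue

def worker(inbox, outbox):
    # Each worker owns its own state and communicates only by messages --
    # the share-nothing style that makes Erlang attractive for multicore.
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown signal
            break
        outbox.put(msg * msg)    # stand-in for real work

def run(jobs, nworkers=2):
    inbox, outbox = Queue(), Queue()
    procs = [Process(target=worker, args=(inbox, outbox))
             for _ in range(nworkers)]
    for p in procs:
        p.start()
    for j in jobs:
        inbox.put(j)
    for _ in procs:
        inbox.put(None)          # one shutdown signal per worker
    results = sorted(outbox.get() for _ in jobs)
    for p in procs:
        p.join()
    return results

if __name__ == "__main__":
    print(run([1, 2, 3, 4]))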
Last week, during the launch of the iPhone, Steve Jobs told the New York Times that the next generation of the Apple OS will not focus on new features but will instead tackle the problem of writing software for multicore processors. Apple has code-named the technology Grand Central and paired it with a programming language called OpenCL, which extends C so programs can run in parallel on graphics processors.
Besides investing millions of research dollars into the search for a magic compiler or reviving an older language, chip vendors are coming up with stopgaps. Unfortunately these stopgaps are focused solely on their own silicon. Nvidia has released a tool called CUDA that helps developers write C code that runs in parallel on Nvidia’s GPUs for scientific computing. (Apple’s OpenCL looks similar to CUDA.) And AMD has its own effort, called Stream.
Freescale on Monday announced a set of multicore embedded processors that come with software support in the form of a simulator that ships before the chips do. As a result, users can start their development efforts and test their multicore code weeks ahead of time. “Customers are not looking for suppliers to offer them a chip and then leave them to program it themselves,” explained Steve Cole, a systems architect for Freescale. “There’s a certain amount of support and market knowledge that we need to have to help our customers.”
With all the work it takes to rewrite code, it’s no wonder everyone from startups to established companies is desperately searching for the programming equivalent of a Babel fish to solve the problem. The one that succeeds will be responsible for taking computing to its next jump in speed.