Five Multicore Chip Startups to Watch


As semiconductor firms get around the limits of making individual processors faster by putting more cores onto a single chip, the mindset of today’s software developers and engineers needs to adapt. To really take advantage of multiple cores, a programmer needs to make her code parallel, splitting a job into pieces that can run at the same time rather than the step-by-step instructions delivered to single-core machines. Energy and communications issues can also constrain how far multicore can grow. Below is a list of startups that have the potential to stretch multicore processors to their very limit.
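To make the idea concrete, here is a minimal sketch (in Python, with invented function and data names) of the same job written as step-by-step serial code and then split across cores with a process pool:

```python
# Minimal sketch of serial vs. parallel execution of the same job.
# The function, data and chunk size here are invented for illustration.
from concurrent.futures import ProcessPoolExecutor

WORK = list(range(10_000))

def square(n):
    return n * n

def serial():
    # Step-by-step instructions, as a single-core machine runs them.
    return [square(n) for n in WORK]

def parallel(workers=4):
    # The same job split into chunks that separate cores can run at once.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(square, WORK, chunksize=1_000))

if __name__ == "__main__":
    assert serial() == parallel()
```

The results come back in order either way; whether the parallel version is actually faster depends on the work per item outweighing the cost of shipping data between processes.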

Tilera: Tilera, based in San Jose, Calif., doesn’t make a software compiler, but rather a 64-core chip that can scale to thousands of cores. It all comes down to the way the chip is designed, according to founder and CTO Anant Agarwal. Instead of the cores “talking” to one another through an on-chip bus interconnect, they’re in a mesh network where information travels faster. Agarwal has said he thinks there will be a Moore’s Law for cores that will lead to the number of cores on a chip doubling every 18 months and a data center in a desktop — a bold, energy-intensive prediction.

Interactive Supercomputing: Like Agarwal, Interactive Supercomputing of Waltham, Mass., is betting on multicore moving into the mainstream fairly soon. The company’s development platform allows programmers to write in Python or Matlab and then parallelizes the code for them. Its vision is of 40-core or 60-core machines sitting on people’s desktops for intensive computing. If you had 60 cores and the right software on your home machine, your home videos would be much, much cooler.

Replay Solutions: This Redwood City, Calif.-based startup makes debugging software that, while not specific to a multicore environment, allows a programmer to replay exactly what happened the moment before a software crash so he can see what the problem is and fix it. The company calls it “TiVo for Software.” It’s useful because, let’s face it, it’s hard enough to follow and debug a single thread of instructions to a chip. Imagine splitting that code and following it across multiple cores — or chasing it through a cloud environment, where the software runs on virtualized hardware whose location keeps changing. This is really just a neat startup all around.
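The record/replay principle itself fits in a few lines: capture every nondeterministic input on the first run, then feed the same values back so a failing run reproduces exactly. This toy sketch (all names invented, nothing like Replay Solutions’ actual product) shows the idea:

```python
# Toy record/replay: log nondeterministic inputs on the first run, then
# replay them so the run can be reproduced exactly. Names are invented.
import random

class Recorder:
    def __init__(self, log=None):
        # If a log is supplied we are in replay mode; otherwise we record.
        self.log = list(log) if log is not None else None
        self.recorded = []

    def rand(self):
        if self.log is not None:
            return self.log.pop(0)   # replay: hand back what happened before
        v = random.random()
        self.recorded.append(v)      # record: remember it for later replay
        return v

def step(rec):
    # Stand-in for code whose behavior depends on a nondeterministic input.
    return rec.rand() * 2

def record_then_replay(n=3):
    rec = Recorder()
    first = [step(rec) for _ in range(n)]
    replay = Recorder(log=rec.recorded)
    second = [step(replay) for _ in range(n)]
    return first, second
```

A real tool has to capture far more than random numbers — timing, I/O, thread interleavings — which is exactly what gets hard across multiple cores.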

Cilk Arts: Cilk Arts is essentially focused on extending one programming language into the multicore environment. In this case it’s C++. IBM, Intel, Nvidia and even Apple are all focused on varying ways to easily develop for multicore chips, but there is plenty of room for a small company with good tools to excel. Cilk, which is based in Burlington, Mass., uses a compiler to parallelize the code in a short amount of time without restructuring it. The first release, for x86 cores, is due out later this year.
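Cilk’s underlying model is fork-join: spawn subproblems onto idle workers, wait for them, combine the results, and drop back to plain serial code below some grain size. Cilk++ expresses this with C++ keywords; the sketch below only mimics the pattern in Python with a thread pool (function names and grain size are invented):

```python
# Fork-join sketch: recursively split a reduction, "spawning" one half
# onto a worker while the current task handles the other half itself.
from concurrent.futures import ThreadPoolExecutor

def reduce_sum(data, pool, grain=1_000):
    if len(data) <= grain:
        return sum(data)                                     # serial base case
    mid = len(data) // 2
    left = pool.submit(reduce_sum, data[:mid], pool, grain)  # "spawn"
    right = reduce_sum(data[mid:], pool, grain)              # run locally
    return left.result() + right                             # "sync" and combine

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=8) as pool:
        assert reduce_sum(list(range(10_000)), pool) == sum(range(10_000))
```

Note that CPython’s global interpreter lock limits the real speedup threads can deliver for CPU-bound work like this; Cilk++ itself, or a process pool, gets true parallelism. The sketch is only about the spawn/sync structure.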

RapidMind: This startup is also creating a platform to let C++ programmers make their code parallel, but it focuses on taking task-oriented code, such as Monte Carlo simulations and graphics rendering, and parallelizing it. Waterloo, Ontario-based RapidMind works on x86 chips as well as graphics processors and IBM’s Cell processor.
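Monte Carlo work parallelizes almost for free because every batch of random samples is independent of the others. A hedged sketch of the idea — estimating pi, with invented names, nothing RapidMind-specific:

```python
# Embarrassingly parallel Monte Carlo: independent sample batches are
# farmed out to worker processes and the hit counts combined at the end.
import random
from concurrent.futures import ProcessPoolExecutor

def count_hits(args):
    samples, seed = args
    rng = random.Random(seed)        # per-task RNG keeps batches independent
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:     # point landed inside the quarter circle
            inside += 1
    return inside

def estimate_pi(total=200_000, tasks=4):
    per = total // tasks
    with ProcessPoolExecutor(max_workers=tasks) as pool:
        inside = sum(pool.map(count_hits, [(per, s) for s in range(tasks)]))
    return 4.0 * inside / (per * tasks)
```

Because no batch depends on another, doubling the cores roughly halves the wall-clock time — the shape of workload these platforms target.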

image of the Cell Broadband Engine courtesy of IBM

14 Responses to “Five Multicore Chip Startups to Watch”

  1. Marco

    “If you had 60 cores and the right software on your home machine, your home videos would be much, much cooler.”

    Come on Stacey, get off your couch and take a look at what people are really going to do with such a computer. NOBODY is going to buy a $20,000 machine and write Python code to spice up family picnic movies. LAZY journalism!

  2. James and Tony: Take a look at Gedae. Gedae has been around for a long time and was developed initially to aid developers in the creation of signal processing applications for multiprocessor systems. Gedae has a niche in defense and aerospace and has been working to expand the technology to be generally applicable over the last 6 years. Most recently Gedae has been working with IBM to support programming both the Cell processor and other architectures such as the Blue Gene/P.

    Gedae is centered around the idea of using a high-level language to specify your algorithm without defining how it will be implemented on the hardware. After the algorithm is defined, the developer chooses a platform and says, in effect, “I want my software implemented this way for this architecture.” The compiler takes all that information and creates an optimized version of the software for the chosen architecture. By maintaining the generality of the application and automatically handling the details of the multiprocessor/multicore implementation, the user gets:
    – a portable algorithm
    – a very efficient implementation of that algorithm for a chosen architecture
    – with no requirement to learn specialized techniques for a particular architecture.

    I know this is self promotion, but we are a company of engineers who generally keep our heads down and let the technology speak for us. It has led to slow expansion but very devoted users. IBM compared Gedae to the majority of the tools mentioned in this article and Gedae came out the clear winner. Most of these companies get a lot of press because of VC money but most also have little substance behind the technology. Many of the concepts these guys are pushing were tried and failed in the aerospace and defense market by other companies long ago.

  3. Tony said: “My cynical prediction: the companies that survive will be the ones that find a niche first, and then see if they can expand it”

    Yeah I definitely agree. There is no single solution to the sequential-parallel transition. We should start looking at a dependency-based model. DataRush, a multicore Java library, is also one of those new companies to watch. They use a flow-based approach which emphasizes dependencies. A great niche solution for problems easily viewed from a data perspective.
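The flow-based idea is easy to sketch: describe the computation as nodes with explicit input dependencies, and let a scheduler fire any node whose inputs are ready, so independent branches run concurrently. A toy version in Python (the graph, node names and operations are all invented; DataRush itself is a Java library):

```python
# Toy dataflow scheduler: nodes declare their dependencies; nodes whose
# inputs are ready run concurrently on a thread pool. All names invented.
from concurrent.futures import ThreadPoolExecutor

# node -> (dependency names, function of the dependencies' results)
GRAPH = {
    "load":  ([], lambda: list(range(10))),
    "evens": (["load"], lambda xs: [x for x in xs if x % 2 == 0]),
    "odds":  (["load"], lambda xs: [x for x in xs if x % 2 == 1]),
    "total": (["evens", "odds"], lambda e, o: sum(e) + sum(o)),
}

def run(graph):
    futures = {}

    def schedule(pool, name):
        if name not in futures:
            deps, fn = graph[name]
            dep_futs = [schedule(pool, d) for d in deps]
            # Each task waits on its own inputs, then fires; independent
            # branches ("evens" and "odds") execute at the same time.
            futures[name] = pool.submit(
                lambda: fn(*[f.result() for f in dep_futs])
            )
        return futures[name]

    with ThreadPoolExecutor(max_workers=8) as pool:
        return {n: schedule(pool, n).result() for n in graph}
```

The appeal of the model is that parallelism falls out of the dependency structure instead of being threaded through the code by hand.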

  4. Agreed that there will be more than one parallel programming model. In fact, numerous ones have been available for some time.

    That said, it’s reasonable to expect the emergence of a de facto standard within each segment. For example, Cilk Arts is betting that for mainstream C++ programmers who want to maintain the serial semantics of their existing apps, Cilk++ will deliver a winning solution.

  5. My cynical prediction: the companies that survive will be the ones that find a niche first, and then see if they can expand it. Trying to take on Intel and ARM head on won’t work. Niches include video surveillance (Cradle Technology, which went from trying to sell chips like Tilera to focusing on a specific market) and cell phone base stations (which currently rely on gangs of DSPs and FPGAs).

    I also suspect that there won’t be one dominant parallel programming architecture. And, of course, meshes have been around a long time (e.g. Transputer), although IIRC Tilera claims to have better software.

FPGAs are also in the mix. If you can afford the license fees, National Instruments has software to program FPGAs using LabVIEW. There are C-to-FPGA compilers.