The incredible graphics on a new iPad? They come from a multicore chip. The high-end servers powering the back ends of e-commerce sites? Those cram six to eight cores onto a single chip.
But more cores come at a cost: they slow things down. That's because cores still communicate as if there were only a couple of them on a chip, working through an interface called a bus. The bus delivers a connection equivalent to telling your friends about a pending engagement via the telephone instead of Twitter or Facebook: each core has to phone the next to tell it what is going on and what needs to happen, creating a significant bottleneck.
At about 10 cores, the effort of building and running a bus network becomes inefficient from both a power-usage and a speed perspective. And since the thinking in the industry is that chips will eventually need thousands of cores to deliver the computing we will demand, something has to change.
So Li-Shiuan Peh, an associate professor of electrical engineering and computer science at MIT, has come up with technology that models the networking layout of the web, with routers and packets, and adapts it for on-chip communication. Peh thinks bundling the information the cores exchange into "packets," and giving each core its own router, is the next on-chip communications model.
However, chip engineers worry about cost and complexity. A packet of data traveling from one core to another has to stop at every router in between, and it can also be delayed if it arrives at a router that is already handling another packet.
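To see why the trade-off can still favor a network, here is a toy sketch (not Peh's actual design; the mesh shape, routing rule, and timing model are illustrative assumptions). On a shared bus, every message takes a turn on the one medium, so total time grows with the number of cores; on a 2D mesh with per-core routers and simple XY (dimension-ordered) routing, a packet's latency grows only with the distance it travels.

```python
# Toy comparison of a shared bus vs. a 2D-mesh network-on-chip.
# Illustrative assumptions only: a 4x4 mesh, one message per core,
# one bus transfer per cycle, and XY (dimension-ordered) routing.

def bus_cycles(messages):
    # A single shared bus serializes traffic: one message per cycle.
    return len(messages)

def mesh_hops(src, dst):
    # XY routing on a 2D mesh: the packet travels along x first, then y,
    # so its hop count is the Manhattan distance between the two cores.
    (sx, sy), (dx, dy) = src, dst
    return abs(dx - sx) + abs(dy - sy)

# 16 cores laid out as a 4x4 grid; every core sends one message to (0, 0).
cores = [(x, y) for x in range(4) for y in range(4)]
msgs = [(c, (0, 0)) for c in cores if c != (0, 0)]

print(bus_cycles(msgs))                       # bus: 15 serialized transfers
print(max(mesh_hops(s, d) for s, d in msgs))  # mesh: worst case is 6 hops
```

The point of the sketch is the scaling, not the numbers: double the core count and the bus total doubles, while the mesh's worst-case hop count grows only with the grid's side length. The packet delays the paragraph above describes would show up here as contention when two packets want the same router in the same cycle, which this toy model ignores.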
So Peh and her team have spent the last ten years working on the problem of putting packets and routers on-chip, and they have come up with ways to make such technology feasible in the real world. Peh says in an MIT release on the science: "The biggest problem, I think, is that in industry right now, people don't know how to build these networks, because it has been buses for decades."