What you are about to read is not science fiction. IBM (s ibm), the 100-year-old company that started out making old-fashioned cash registers and “business tabulating machines,” has come up with a new chip that marries our brain’s architecture with silicon guts. Like people, it learns instead of being programmed, and like a good semiconductor, it’s easy to make with today’s chip production technologies. While it might have started out as a research project seeking chips that deliver more oomph while being stingy about power consumption, today it is a radical idea that takes computing to more places and, in doing so, potentially unleashes new waves of innovation.
A soup-to-nuts approach.
That’s the hope anyway, although IBM isn’t alone in its efforts to rethink the way computers are built. HP (s hpq) has a different initiative aimed at creating chips that process more data, more power-efficiently, by changing the basic building blocks inside the chip. Other companies and labs are eyeing quantum computing and other far-fetched ideas (GigaOM Pro sub req’d). But IBM, being IBM, has both the money and the vision to bring an entirely new way of computing forward; in fact, it has been laying the groundwork for years.
Remember those commercials where IBM could tell someone in corporate HQ where a single item was on a truck out in the middle of nowhere? Those commercials aired in 2005, and ever since, IBM has been building the infrastructure: its services business, its software and even hardware designed to process massive amounts of information on the fly. Watson, the Jeopardy-playing computer, is a wonderful example of how far IBM was willing to take the hardware. But most people there knew the hardware was never going to get to the point where IBM’s customers could not only locate one of thousands of trucks to tell it that it was lost, but also measure everything about that truck, from its speed to the temperature inside the containers it held, and then send alerts based on those variables. And the system would have to do all this while consuming a kilowatt of power, in a box the size of a shoebox.
Using brainpower to solve architecture problems.
That’s where this new silicon comes in. IBM calls them neurosynaptic chips, and they are architected in a completely different way than current semiconductors. Instead of creating silicon that has a processing core, a bus and a memory cache, IBM has taken a page from the human brain. The integrated memory is represented by synapses, computation by neurons and communication by axons. The current version is far less impressive than the human brain, which has billions of neurons; this chip has 256. But the breakthrough here is not just the new architecture but what that architecture means and where it fits in the future of computing.
Today’s chips run into a problem called the Von Neumann bottleneck, which occurs when the chip cannot feed data from memory to the processing core fast enough. Without the data, the chip idles, and the incredible clock speeds we’ve built into chips are somewhat wasted. The neurosynaptic chip throws that model away and relies instead on tracking relationships between events and determining whether those events lead to action. When the “neurons” on the chip fire, it sets off a binary response that the processors in each neuron evaluate. When enough feedback comes from that neuron or neurons nearby, the system “understands” where that information fits in, and the chip can make a decision to react to that series of stimuli. Fundamentally, this chip learns.
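The firing behavior described above resembles the classic leaky integrate-and-fire model from computational neuroscience: inputs accumulate in a neuron until a threshold is crossed, at which point the neuron emits a binary spike and resets. The sketch below is an illustrative toy, with made-up names and parameter values; it is not IBM’s actual circuit design.

```python
class SpikingNeuron:
    """A toy leaky integrate-and-fire neuron (illustrative only)."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold  # potential needed to fire a spike
        self.leak = leak            # per-step decay of the potential
        self.potential = 0.0

    def step(self, weighted_input):
        """Integrate one time step of input; return 1 if the neuron fires."""
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            return 1
        return 0


neuron = SpikingNeuron()
# A steady sub-threshold input still produces occasional spikes as
# the potential builds up, decays, crosses threshold and resets.
spikes = [neuron.step(0.4) for _ in range(10)]
print(spikes)  # [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The key contrast with a conventional core is that the output is a sparse stream of binary events rather than the result of fetching instructions and data over a shared bus.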
Dharmendra Modha, project leader for IBM Research, explains that most programmers write code that delivers a lot of instructions to the processor, followed by a few if/then statements that describe actions. The neurosynaptic chip doesn’t need those if/then statements because it makes those correlations itself, based on how often its “neurons” fire off ones or zeros. This means that IBM’s new chip requires a completely different type of programming (and that it’s suited to completely different types of jobs than today’s chips).
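One well-known way a system can form correlations from firing patterns, rather than from programmed if/then rules, is a Hebbian-style learning rule: a connection strengthens when the two neurons it links fire together and slowly decays otherwise. The sketch below is a hypothetical illustration of that general principle; the function name, learning rate and decay constant are my assumptions, not IBM’s programming model.

```python
def hebbian_update(weight, pre_spike, post_spike, rate=0.1, decay=0.01):
    """Strengthen a synapse when both neurons fire together; else decay.

    This is the "fire together, wire together" rule in miniature:
    no task-specific if/then logic, just correlation tracking.
    """
    if pre_spike and post_spike:
        return min(1.0, weight + rate)   # correlated firing: strengthen
    return max(0.0, weight - decay)      # no correlation: slowly forget


w = 0.5
# Repeated correlated activity drives the weight toward its ceiling...
for _ in range(5):
    w = hebbian_update(w, pre_spike=1, post_spike=1)
print(round(w, 2))  # 1.0
# ...while a quiet stretch lets it drift back down.
for _ in range(5):
    w = hebbian_update(w, pre_spike=0, post_spike=0)
print(round(w, 2))  # 0.95
```

The "program" here is just the learning rule; which stimuli end up mattering is determined by the data the chip sees, which is why it suits different jobs than a conventional processor.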
Cognitive computing and why it matters.
IBM calls this new computing cognitive computing, and the goal is to take today’s itty-bitty neurosynaptic chip, which can currently play Pong or steer a toy race car around a track, and create a machine that combines the equivalent of 10 billion neurons, all in the size of a shoebox that consumes less than a kilowatt of power. Such a machine would be capable of doing much more, although perhaps one would need multiple machines to create Modha’s vision of a sensor network in every ocean tracking ambient temperature, water turbidity and other metrics to warn folks of an upcoming storm or tsunami. Today’s chip, in addition to playing Pong, might be useful if attached to a door, where it could “learn when to open the door to let the cat out,” suggested Modha.
“The goal is not to replace today’s computers. It’s to really take the road less traveled and build a new generation of computers with a totally new approach to problems in business and science and government,” Modha says. “If today’s computers are left-brained, rational and sequential, then cognitive computing is intuitive and right-brained and slow, but the two together can become the future of our civilization’s computing stack.”
That’s a big vision, but IBM’s a big company, and one that has managed to influence the course of computing before. For more on the science, check out the video I shot last year with Modha in his newly built lab at IBM’s Almaden Research Lab in California.