Things change fast in computer science, but odds are that they will change especially fast in the next few years. Much of this change centers on the shift toward parallel computing. In the short term, parallelism will take hold in massive datasets and analytics, but longer term, the shift to parallelism will impact all software, because most existing systems are ill-equipped to handle this new reality.
Like many changes in computer science, the rapid shift toward parallel computing is a function of technology trends in hardware. Most technology watchers are familiar with Moore’s Law, and the more general notion that computing performance doubles about every 18-24 months. This continues to hold for disk and RAM storage sizes, but a very different story has unfolded for CPUs in recent years, and it is changing the balance of power in computing — probably for good.
What Moore’s Law predicts, specifically, is the number of transistors that can be placed on an integrated circuit. Until recently, these extra transistors had been used to increase CPU speed. But, in recent years, limits on heat and power dissipation have prevented computer architects from continuing this trend. Basically, CPUs are not getting much faster. Instead, the extra transistors from Moore’s Law are being used to pack more CPUs into each chip.
Most computers being sold today have a single chip containing between two and eight processor “cores.” In the short term, this still seems to make our existing software go faster: one core can run operating system utilities, another can run the currently active application, another can drive the display, and so on. But remember, Moore’s Law continues doubling every 18 months. That means your laptop in nine years will have 128 processors, and a typical corporate rack of 40-odd computers will have something in the neighborhood of 20,000 cores.
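The arithmetic behind those projections is easy to check. A minimal sketch, assuming a laptop with 2 cores and a rack server with 8 cores today, both doubling every 18 months:

```python
def cores_after(years, cores_today, doubling_months=18):
    """Project a chip's core count after `years` of steady doubling."""
    doublings = years * 12 // doubling_months
    return cores_today * 2 ** doublings

# Nine years is six doublings, i.e. a factor of 64.
print(cores_after(9, cores_today=2))       # laptop: 128 cores
print(40 * cores_after(9, cores_today=8))  # 40-machine rack: 20480 cores
```

So the laptop figure assumes a dual-core machine today, and the ~20,000-core rack assumes 8-core servers.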
Parallel software should, in principle, take advantage not only of the hundreds of processors per machine, but of the entire rack — even an entire data center of machines. Since individual cores will not get appreciably faster, we need massively parallel software that can scale up with the increasing number of cores, or we will effectively fall off the exponential growth curve of Moore’s Law. Unfortunately, the vast majority of today’s software is written for a single processor, and no known technique can “auto-parallelize” these programs.
Worse yet, this is not just a legacy software problem. Programmers find it notoriously difficult to reason about multiple simultaneous tasks; the parallel model is much harder for the human brain to grasp than “plain old” serial algorithms. So this is a problem that threatens to plague even new, greenfield software projects.
There are some precedents for how to overcome this problem. Over the past 20 years, the main bright spot in parallel software development has been in high-volume data analysis. SQL has been a successful massively parallel programming language since the late 1980s. Legacy SQL programs parallelize naturally, and every SQL programmer continues to write inherently parallel code, often without realizing it.
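The reason SQL parallelizes so well is that a query like SELECT SUM(amount) FROM sales says what to compute, not how: the engine is free to split the rows across workers, aggregate each partition independently, and merge the partial results. A minimal sketch of that partition-and-merge plan (illustrative only, not how any particular database engine is built):

```python
from multiprocessing import Pool

def partial_sum(partition):
    # Each worker aggregates its own partition, with no coordination.
    return sum(partition)

if __name__ == "__main__":
    amounts = list(range(1000))                     # stand-in for a table column
    partitions = [amounts[i::4] for i in range(4)]  # split rows across 4 workers
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, partitions))
    print(total == sum(amounts))  # the parallel plan gives the serial answer
```

The SQL programmer never writes the partitioning logic; the declarative query lets the engine supply it.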
Unfortunately, SQL represents a tightly scoped (albeit critical) corner of the software industry. But in recent years, a new ecosystem of data-intensive parallel development has been growing around the MapReduce parallel programming framework, which allows programmers to write data-parallel code in familiar languages like C, Java, Python and Perl. In my next post, I’ll talk about how the lessons of SQL and the growing excitement about MapReduce may bring parallelism to a larger swath of the software market.
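The MapReduce model itself is simple enough to sketch in a few lines. In the canonical word-count example, a map phase emits (word, 1) pairs, the framework shuffles the pairs by key, and a reduce phase sums each group; a real framework runs the map and reduce calls across many machines, while this toy version just shows the programming model:

```python
from collections import defaultdict

def map_phase(document):
    """Emit a (word, 1) pair for every word in the document."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Group emitted pairs by key, as the framework would between phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Combine all values for one key into a final count."""
    return key, sum(values)

docs = ["the quick brown fox", "the lazy dog", "the fox"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["the"])  # 3
print(counts["fox"])  # 2
```

Because each map call and each reduce call is independent, the framework can scatter them across thousands of cores without the programmer writing any coordination code.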