
Things change fast in computer science, but odds are that they will change especially fast in the next few years. Much of this change centers on the shift toward parallel computing. In the short term, parallelism will take hold in massive datasets and analytics, but longer term, the shift to parallelism will impact all software, because most existing systems are ill-equipped to handle this new reality.

Like many changes in computer science, the rapid shift toward parallel computing is a function of technology trends in hardware. Most technology watchers are familiar with Moore’s Law, and the more general notion that computing performance doubles about every 18-24 months. This continues to hold for disk and RAM storage sizes, but a very different story has unfolded for CPUs in recent years, and it is changing the balance of power in computing — probably for good.

What Moore’s Law predicts, specifically, is the number of transistors that can be placed on an integrated circuit. Until recently, these extra transistors had been used to increase CPU speed. But, in recent years, limits on heat and power dissipation have prevented computer architects from continuing this trend. Basically, CPUs are not getting much faster. Instead, the extra transistors from Moore’s Law are being used to pack more CPUs into each chip.

Most computers being sold today have a single chip containing between two and eight processor “cores.” In the short term, this still seems to make our existing software go faster: one core can run operating system utilities, another can run the currently active application, another can drive the display, and so on. But remember, Moore’s Law continues doubling roughly every 18 months. That means your laptop in nine years will have 128 processors, and a typical corporate rack of 40-odd computers will have something in the neighborhood of 20,000 cores.
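
To see where those numbers come from, here is a quick back-of-the-envelope sketch in Python. The starting points (2 cores in today's laptop, 8 cores in each of the 40 rack servers) and the 18-month doubling period are assumptions chosen to match the figures above.

```python
# Back-of-the-envelope extrapolation of core counts under Moore's Law.
# Assumed starting points: 2 cores per laptop, 8 cores per rack server,
# a 40-machine rack, and a doubling of core counts every 18 months.

def cores_after(start_cores, years, doubling_period_years=1.5):
    doublings = int(years / doubling_period_years)
    return start_cores * 2 ** doublings

laptop = cores_after(2, 9)       # 2 * 2**6 = 128 cores
rack = 40 * cores_after(8, 9)    # 40 * 512 = 20,480 cores

print(f"Laptop in 9 years: {laptop} cores")          # 128
print(f"40-machine rack in 9 years: ~{rack} cores")  # roughly 20,000
```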

Parallel software should, in principle, take advantage not only of the hundreds of processors per machine, but of the entire rack — even an entire data center of machines. Since individual cores will not get appreciably faster, we need massively parallel software that can scale up with the increasing number of cores, or we will effectively drop off of the exponential growth curve of Moore’s Law. Unfortunately, the large majority of today’s software is written for a single processor, and there is no technique known to “auto-parallelize” these programs.

Worse yet, this is not just a legacy software problem. Programmers still find it notoriously difficult to reason about multiple simultaneous tasks; the parallel model is much harder for the human brain to grasp than writing “plain old” sequential algorithms. So this is a problem that threatens to plague even new, greenfield software projects.
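
To get a feel for why, here is a deliberately racy Python sketch: several threads increment a shared counter with no coordination, and updates silently vanish depending on how the threads happen to interleave. (The yield between the read and the write is only there to make the interleaving easy to trigger; the bug is the unsynchronized read-modify-write itself.)

```python
import threading
import time

counter = 0

def racy_add(n):
    """Increment the shared counter n times with no synchronization."""
    global counter
    for _ in range(n):
        current = counter        # read shared state
        time.sleep(0)            # yield: another thread may run right here
        counter = current + 1    # write back a possibly stale value

threads = [threading.Thread(target=racy_add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Run sequentially this would be 40000; with the race it is usually far
# less, and the exact value changes from run to run.
print(counter)
```

The fix here is a one-line lock, but spotting and reasoning about every such interleaving in a large program is precisely what makes the parallel model so hard to grasp.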

There are some precedents for how to overcome this problem. Over the past 20 years, the main bright spot in parallel software development has been in high-volume data analysis. SQL has been a successful massively parallel programming language since the late 1980s. Many legacy SQL programs parallelize naturally, and every SQL programmer continues to write inherently parallel code.
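
One way to see why SQL code parallelizes so naturally: a declarative aggregate decomposes into independent per-partition work plus a cheap merge, so an engine can spread the partitions across cores or machines without the query's author changing anything. The Python sketch below only illustrates that idea; it is not a description of how any particular database engine is built.

```python
from collections import Counter

# Conceptual decomposition of a SQL aggregate such as
#   SELECT category, SUM(amount) FROM sales GROUP BY category;
# Each call to partial_sum touches only its own data partition, so the
# calls are independent and could run on separate cores or machines;
# the final merge just adds the per-partition sums together.

def partial_sum(partition):
    totals = Counter()
    for category, amount in partition:
        totals[category] += amount
    return totals

def merge(partials):
    merged = Counter()
    for p in partials:
        merged.update(p)   # Counter.update adds per-key totals
    return dict(merged)

partitions = [
    [("books", 12.0), ("games", 30.0)],   # data held by worker 1
    [("books", 8.0), ("music", 5.0)],     # data held by worker 2
]
print(merge(partial_sum(p) for p in partitions))
# {'books': 20.0, 'games': 30.0, 'music': 5.0}
```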

Unfortunately, SQL represents a tightly scoped (albeit critical) corner of the software industry. But in recent years, a new ecosystem of data-intensive parallel development has been growing around the MapReduce parallel programming framework, which allows programmers to write data-parallel code in familiar languages like C, Java, Python and Perl. In my next post, I’ll talk about how the lessons of SQL and the growing excitement about MapReduce may bring parallelism to a larger swath of the software market.
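
For a flavor of the programming model, here is a minimal word count written in the MapReduce style, using Python and a local process pool as a stand-in for a real framework. The function names and plumbing are illustrative assumptions, not the API of Hadoop or any specific system; the point is that the programmer supplies only a map function and a reduce function, and the framework handles partitioning the data and running the phases in parallel.

```python
from collections import defaultdict
from itertools import chain
from multiprocessing import Pool

def map_phase(document):
    # Emit (key, value) pairs for one input record.
    return [(word, 1) for word in document.split()]

def reduce_phase(item):
    # Combine all values emitted for a single key.
    key, values = item
    return key, sum(values)

def mapreduce(documents):
    with Pool() as pool:
        # Map phase: each document is processed independently, in parallel.
        mapped = chain.from_iterable(pool.map(map_phase, documents))
        # Shuffle: group every emitted value by its key.
        groups = defaultdict(list)
        for key, value in mapped:
            groups[key].append(value)
        # Reduce phase: each key's group is also reduced independently.
        return dict(pool.map(reduce_phase, groups.items()))

if __name__ == "__main__":
    docs = ["the quick brown fox", "the lazy dog", "the fox"]
    print(mapreduce(docs))
    # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```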

Joe Hellerstein is a professor of Computer Science at the University of California, Berkeley, and has written a white paper with more detail on this topic.

Comments

  1. Map-reduce / fork-and-join was designed to distribute a problem set to multiple computers or nodes in a grid, not to multi-core processors. I think using multiple threads with a thread pool would help more for multi-core processing. Or just use Erlang!

  2. [...] invitations to guest blog at CCCBlog and GigaOM [...]

  3. [...] in Uncategorized The first of two invited posts at GigaOm are up.  These are not researchy, they’re intended to be informative to a broad audience. [...]

  4. Hi Joe,

    Good article and whitepaper – (and thanks for the Aster mention). There is a clear shift toward parallelism to overcome data challenges. In case you didn’t see it, there was a good article in the San Jose Mercury News last week.

    Wayne Eckerson from TDWI also did a webcast with Aster on MapReduce, and authored a whitepaper which folks might find interesting.

    Thanks for shining a light on this.

  5. Just extrapolating Moore’s Law gives us super machines in 10 years: http://disruptionmatters.com/2008/06/11/2018-what-laptop-will-you-use-in-ten-years/

  6. Very interesting article. I’m looking forward to further tech reports on this topic.

    One thing I’d like to point out: the hardware CAD industry has also adopted parallelism, through languages like Verilog, because of its need to model hardware, which is inherently parallel in nature.

  7. [...] Parallel programming in the age of big data. 2. Programming a parallel future. 3. Terracotta doesn’t want to kill your database, just maim it. 4. Supercomputers, Hadoop, [...]

  8. [...] BitC and Coyotos. I came into programming last year excited about my understanding that to support the trend towards parallelism we had to rework something significant on at least one of the following levels [...]

