To share or not to share memory: that was the question that sparked the most heated debate today at a panel during the GigaOM Network’s Structure conference on next-gen architectures for the cloud.
Anant Agarwal, co-founder and CTO of startup Tilera (maker of a massively multicore chip), argued that the key to optimizing a general-purpose processor is “all about sharing.”
Gary Lauterbach, CTO of SeaMicro, disagreed. “I like sharing, but let me compare it to an apartment building,” where you wouldn’t want dozens of rooms served by only one bathroom. “You run into scalability problems with multicore,” he said. Already, Lauterbach observed, “more workloads than we thought are transitioning to the scale-out model, a shared-nothing architecture. There’s massive power available in that paradigm.”
Tilera has built in support for many of the popular languages and platforms used to build web services applications, including PHP, Python, Ruby, and Java, along with Linux, Hadoop, and MySQL, and its machine works best on massively parallel processing jobs such as transcoding or certain types of financial calculations. Even so, Lauterbach and Agarwal sharply disagreed over how difficult it is for programmers to work with the shared-memory model that Tilera advocates.
Where the panelists generally agreed was that cranking out more code at higher speeds is a top priority for the web. As James Watters, senior manager for vCloud Solutions at VMware, put it, people are choosing engineering productivity over high-performance code. He pointed to Facebook as an example: market pressure pushed the company to speed development in its early days by building on the scripting language PHP rather than the faster but more demanding C++. Bottom line, said Ian Ferguson, director of enterprise and embedded solutions for ARM: “You need a standard architecture with massive amounts of software being developed for it.”