Fed up with the limitations of current computer chips and their related intellectual property, a team of researchers at the University of California, Berkeley, is pushing an open source alternative. The RISC-V instruction set architecture was originally developed at the university to help teach computer architecture to students, but now its creators want to push it into the mainstream to help propel emerging markets such as cloud computing and the internet of things.
One of the researchers leading the charge behind RISC-V is David Patterson, the project’s creator and also the creator of the original RISC instruction set in the 1980s. He views the issue as one centered around innovation. Popular chip architectures historically have been locked down behind strict licensing rules by companies such as Intel, ARM and IBM (although IBM has opened this up a bit for industry partners with its OpenPower foundation). Even for companies that can afford licenses, he argues, the instruction sets they receive can be complex and bloated, requiring a fair amount of effort to shape around the desired outcome.
Many of today’s processor architectures (including IBM Power, ARM and MIPS) are actually based on RISC, Patterson noted, but companies have been able to reap the rewards of the patent system by protecting what he calls “quirks.” These are components that don’t fundamentally set one architecture apart from another, but are technically different and are required for the operation of the instruction set. ARM — whose designs power smartphone chips from Qualcomm, Apple, Marvell and others, and server chips inside Amazon and Google data centers — is probably the most popular example right now.
Still, all might be well and good if you’re a big company that can afford to buy licenses from big chip vendors, which invest a lot of time and money into developing some very good technologies. But Patterson appears to be looking out for the little guy — small companies or researchers that want to develop their own chips for their own specialized applications, but don’t have deep pockets. That requires being able to experiment with the underlying instruction set, experiment with chip designs and share that work openly without fearing a violation of license terms.
“For that to happen,” Patterson said, “you have to have an unrestricted instruction set.”
Indeed, there are other open source instruction sets out there, including OpenRISC and SPARC V8, as well as industry foundations such as IBM’s OpenPower and the MIPS-based Prpl. It’s too early to tell whether the latter groups will find much traction, especially among small firms, individuals or universities, and Patterson said the open source community never really took to OpenRISC and SPARC V8.
It was just several months ago that Patterson and his colleagues realized they should try to push RISC-V outside the classroom, as people “desperate enough or interested enough” approached them asking if there was a way to get their hands on it. Already, UC Berkeley has created several cores based on RISC-V, and there are multiple other projects underway at other institutions. Patterson and his colleague Krste Asanović recently published a technical paper laying out the case for RISC-V, and some of its specifications, in more detail.
Patterson says RISC-V is more capable in many ways and more efficient (even against some proprietary designs), and is ideal for this moment in time because it has a small code base and other features that make it more suitable for the system-on-a-chip designs that dominate today’s computing world, largely thanks to ARM. As the demands of connected devices evolve, kits such as Raspberry Pi mature and scale-out cloud architectures grow, a thriving RISC-V community should be able to design chips that evolve along with them.
“We think it will make sense to design custom hardware for cloud computing that will be more efficient than standard processors,” Patterson said in response to my question about how RISC-V might fit into existing open source projects such as the Facebook-created Open Compute Project. He also noted the work the UC Berkeley AMPLab is doing around data processing and distributed systems, suggesting an easily customizable chip architecture could also help solve problems around fault tolerance and the possibility that 64 bits of addressable memory space will no longer be enough in some instances.
“I think this will happen,” Patterson said. “Hardware will get more specialized for the client and more specialized for the cloud.”