The Hybrid Memory Cube consortium, a group that includes some of the largest memory manufacturers, has released a standard specification for the DRAM technology after 17 months of development. The goal of the consortium — and the spec — is to support a new form of computer chip that weds memory and processing in a dense cube structure that packs in more memory while consuming less power.
The resulting chips should find homes in high-performance computing, networking, gaming and other applications that require fast access to data stored in memory. And they should appear in physical products by the first half of next year. Eventually, this will have applications in cloud computing and even data analysis. According to the consortium, a single HMC offers a 15x performance increase and uses 70 percent less energy per bit when compared to today’s memory.
When I covered the launch of the effort in 2011, I explained the problem this new chip would solve:
The chip industry is really good at making a CPU that does calculations faster, but it hasn’t been able to make memory chips fast and dense enough to feed the cores enough information to keep up with the CPU’s capabilities. So what chips are left with is a massively large brain that stands idle sometimes while it waits for information to come to it. That idle time burns power and reduces the overall performance of a computer — and it’s becoming a bigger deal as both power and performance are being pushed to the edge.
Since that story was written, the consortium — founded by Samsung and Micron — has grown to more than 100 companies. The members now include big-name vendors and users such as ARM, IBM, Microsoft, SK Hynix and HP, as well as many smaller and specialty chip firms. Today's announcement details how other chips will connect with the hybrid memory cube.
The structure of the chip is very different from traditional densely packed DRAM modules. It's stacked, which is a common way chipmakers have tried to pack in more memory — but instead of stacking the DRAM dies and connecting them via interconnects on the outside of the chip, the HMC runs holes straight through the silicon, with tiny vertical wires connecting the stacked memory dies.
This process, known as through-silicon vias, or TSVs, is of growing interest in the industry as it slogs down the path of making 3-D chips. But anytime you change a core silicon element significantly, you have to figure out a lot of things, from the hardware layer all the way up to the applications. Today's spec details how systems will shunt bits to and from the hybrid memory cube. From the release:
“The achieved specification provides an advanced, short-reach (SR) and ultra short-reach (USR) interconnection across physical layers (PHYs) for applications requiring tightly coupled or close-proximity memory support for FPGAs, ASICs and ASSPs, such as high-performance networking, and test and measurement. The next goal for the consortium is to further advance standards designed to increase data rate speeds for SR from 10, 12.5 and 15Gb/s up to 28Gb/s. Speeds for USR interconnections will be driven from 10 up to 15Gb/s. The next level of specification is projected to gain consortium agreement by the first quarter of 2014.”
In short, the HMC will support data rates of at least 10 gigabits per second per lane, both when the chips sit on the same board (short reach) and when they are packed even more tightly together (around two to three inches, ultra short reach) — and those rates will climb over time. Wicked fast.
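To put those per-lane numbers in perspective, here's a back-of-the-envelope sketch of what one link adds up to. The lane count is an assumption on my part — the announcement above doesn't state it, but the HMC spec describes a full-width link as 16 lanes in each direction:

```python
# Rough bandwidth math for a single HMC link.
# ASSUMPTION: a full-width link has 16 lanes per direction (not stated
# in the consortium's announcement; taken from the HMC spec's link
# definitions). Lane rates of 10, 12.5 and 15 Gb/s come from the release.

def link_bandwidth_gbytes(lane_rate_gbps, lanes=16):
    """Peak bandwidth of one link in one direction, in gigabytes/sec."""
    return lane_rate_gbps * lanes / 8  # divide by 8 bits per byte

for rate in (10, 12.5, 15):  # short-reach lane rates from the spec, Gb/s
    gb_per_sec = link_bandwidth_gbytes(rate)
    print(f"{rate} Gb/s per lane -> {gb_per_sec:.1f} GB/s per link, per direction")
```

So even at the baseline 10 Gb/s lane rate, a single full-width link would move on the order of 20 GB/s each way — before counting the multiple links a cube can expose.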