Nervana Systems, a San Diego-based startup building a specialized system for deep learning applications, has raised a $3.3 million series A round of venture capital. Draper Fisher Jurvetson led the round, which also included Allen & Co., AME Ventures and Fuel Capital. Nervana launched in April with a $600,000 seed round.
The idea behind the company is that deep learning — the advanced type of machine learning that is presently revolutionizing fields such as computer vision and text analysis — could really benefit from hardware designed specifically for the types of neural networks on which it’s based and the amount of data they often need to crunch. Indeed, the computers in play do matter: the application of GPUs to run deep learning algorithms drastically improved their performance and made them a viable option for certain tasks. GPUs are now the preferred processor type for many researchers and practitioners in the space.
Nervana isn’t talking too much about the guts of its system just yet, but it appears to be based on something other than GPUs or newer processors like IBM’s brain-inspired SyNAPSE chip. And, in fact, Nervana’s architecture includes a software component, as well. Co-founder and CEO Naveen Rao explained it to me like this in an email:
“We’re building a fundamentally different kind of computer. Deep learning/neural nets have given us the mathematical framework to break apart an estimation problem into many nodes. In addition, [deep learning] doesn’t need very precise computations at each node. So, we can actually exploit these features and build a computer that’s much more brain-like (but NOT neuromorphic). This will solve real-world problems and allow the scale of DL to go to the next level.
Our approach to deep learning is also quite novel in that we can co-optimize the hardware and algorithms. Think of it as a vertical integration from algorithms to logic gates. This is how we can achieve such high performance and do it at much less power. We are also writing a software library to take in high level descriptions of neural nets and target them to various hardware platforms, including our own.”
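Rao’s point about imprecise per-node computation is easy to illustrate. The sketch below is a minimal, hypothetical example (not Nervana’s actual design): it computes the output of a single neural-net node once with full-precision weights and once with weights crudely rounded to 8-bit levels, and shows the two outputs barely differ.

```python
import math
import random

random.seed(0)

def neuron(weights, inputs):
    # A single neural-net node: weighted sum followed by a sigmoid.
    s = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))

def quantize(w, bits=8, w_max=1.0):
    # Snap a weight to the nearest of 2**bits - 1 levels in [-w_max, w_max],
    # mimicking low-precision hardware arithmetic.
    levels = 2 ** bits - 1
    step = 2 * w_max / levels
    return round(w / step) * step

weights = [random.uniform(-1, 1) for _ in range(256)]
inputs = [random.uniform(-1, 1) for _ in range(256)]

exact = neuron(weights, inputs)
approx = neuron([quantize(w) for w in weights], inputs)

# The rounding errors largely cancel across the sum, so the node's
# output shifts only slightly despite the much coarser weights.
print(exact, approx, abs(exact - approx))
```

Because the errors at each node stay small and tend to average out across many nodes, hardware can trade numerical precision for speed and power, which is the bet Rao describes.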
It will be very interesting to see how the commercial market for deep learning plays out over the next few years and which approaches work (this is something we’ll be discussing at our upcoming artificial intelligence and deep learning meetup, as well). I have spoken with many people who genuinely believe deep learning is one of the bigger advances in computing ever, but there are still a lot of questions about how a technique that still requires an expert touch will make its way into the mainstream.
While GPUs have been the go-to processor in these early stages, there’s new research investigating the feasibility of running deep learning algorithms on FPGAs and on systems spanning from supercomputers to scale-out cloud architectures. Microsoft recently proved the models can run effectively on surprisingly few standard CPUs, and at least one other startup trying to commercialize deep learning is also focused on CPUs and even standard software platforms such as Hadoop.
Already, there are cloud-based services popping up that, even if they run on GPUs, remove (possibly thankfully) the choice in processor and much of the model-tuning complexity from the buyer.
Nervana is certainly prescient in trying to design deep learning systems early in the game, hopefully establishing a name for itself as the de facto systems vendor in the space. But even if it succeeds technologically, its real challenge — like that of so many other specialists today — might be staving off the push for commoditization and standardization coming from all sides. Across all applications, many companies are losing their appetite for specialized systems when their current ones work just fine.