A team of Stanford scientists has created a circuit board, dubbed “NeuroGrid,” consisting of 16 computing cores that simulate more than 1 million neurons and billions of synapses. They think it could be mass produced for about $400 per board, meaning it would be economically feasible to embed the boards into everything from robots to artificial limbs in order to speed up their computing cycles while significantly reducing their power consumption.
But even if that’s possible, there would still be one big problem: Right now, programming NeuroGrid essentially requires a neuroscientist.
It’s arguably a bigger problem than the cost of production (although the $40,000 price tag for the prototype version would certainly be prohibitive), because processor architectures are nothing without people to build applications for them. We’re already used to processors and microchips embedded in many of the things we use, but most are slow, weak and power-hungry compared to some of the new designs. A future that includes powerful artificial intelligence and useful ubiquitous computing will depend on tools that make it as easy to program thousands of cores, millions of neurons or even collections of qubits as it is to build applications for standard desktop processors.
That’s why NeuroGrid’s creators are working on compilers that would make it easier for people without an understanding of brain functions to write applications for the architecture. It’s also why D-Wave Systems is working on compilers, and even APIs, to take the specialization out of quantum computing. Nvidia has been pushing its CUDA framework for programming graphics processing units for years, and is now stepping up its efforts as machine learning and advanced AI begin to take off.
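For a sense of what that specialization looks like in practice, here is the canonical introductory CUDA example, a vector addition. This is a generic sketch, not code from any of the projects mentioned; even at this minimal scale, the programmer has to reason about grids, blocks, threads and explicit memory transfers between CPU and GPU, which is exactly the kind of mental overhead these compilers and APIs aim to hide.

```cuda
#include <cuda_runtime.h>

// Kernel: each GPU thread computes one element of c = a + b.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];  // guard against out-of-range threads
}

// Host side: copy inputs to the device, launch enough 256-thread
// blocks to cover n elements, then copy the result back.
void addOnGpu(const float *a, const float *b, float *c, int n) {
    float *da, *db, *dc;
    size_t bytes = n * sizeof(float);
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, a, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, b, bytes, cudaMemcpyHostToDevice);
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);
    cudaMemcpy(c, dc, bytes, cudaMemcpyDeviceToHost);
    cudaFree(da);
    cudaFree(db);
    cudaFree(dc);
}
```

On an ordinary desktop CPU the same operation is a three-line loop; the gap between the two is the developer-base problem the article describes.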
Cheap computers that perform complex calculations at the speed of thought, or offload part of a task to a cloud-based system that can, could make the internet of things a lot more compelling by making the devices a lot more useful. They could make self-driving cars even better than they already are. They could make computer vision, speech recognition and other applications of machine learning part of our day-to-day interactions with more than just our smartphones’ personal assistant apps.
All the better if we don’t have to worry about charging something every few hours. In some cases, such as medical devices, energy efficiency will be a critical element.
But without a deep pool of programming know-how, innovation in these areas will be confined to the imaginations of the relatively few people who know how to optimize software for these new types of computers. Or worse, it could be that really powerful new technologies never get off the ground commercially because they just can’t attract the developer bases they require. The supercomputing world is already experiencing this scenario to some degree, as systems makers rush to build exascale systems composed of so many cores that pretty much nobody could write applications for them.
It’s not a crisis just yet — with the exception of GPUs, few if any quantum computers or brain-like architectures are being sold commercially — but it’s something worth thinking about to avoid putting the cart before the horse when it comes to the future of robotics, wearable technology and smart everything. They’re great in theory, but fully realizing their potential might mean fully embracing some new foundations for powering them.
Feature image courtesy of Shutterstock user Sebastian Kau.