
Summary:

As a new brain-like computing architecture out of Stanford demonstrates, we’re on the cusp of powerful but fundamentally different ways of doing computing. However, whether they’re embedded in devices or packed together in supercomputers, programming these new types of systems will take some re-education.

A team of Stanford scientists has created a circuit board, dubbed “NeuroGrid,” consisting of 16 computing cores that simulate more than 1 million neurons and billions of synapses. They think it could be mass-produced for about $400 per board, meaning it would be economically feasible to embed the boards in everything from robots to artificial limbs in order to speed up their computing cycles while significantly reducing their power consumption.

But even if that’s possible, there would still be one big problem: Right now, NeuroGrid essentially requires a neuroscientist to program it.

It’s arguably a bigger problem than the cost of production (although the $40,000 price tag for the prototype version would be prohibitive on its own), because processor architectures are nothing without people to build applications for them. We’re already used to processors and microchips embedded in many of the things we use, but most are slow, weak and power-hungry compared to some of the new designs. A future that includes powerful artificial intelligence and useful ubiquitous computing will depend on tools that make it as easy to program thousands of cores, millions of neurons or even collections of qubits as it is to build applications for standard desktop processors.

That’s why NeuroGrid’s creators are working on compilers that would make it easier for people without an understanding of brain functions to write applications for the architecture. It’s also why D-Wave Systems is working on compilers, and even APIs, to take the specialization out of quantum computing. Nvidia has been pushing its CUDA framework for programming graphics processing units for years, and is now stepping up its efforts as machine learning and advanced AI are really beginning to take off.
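
To get a sense of what that kind of abstraction buys, here is a minimal, hypothetical CUDA sketch (not drawn from Nvidia’s documentation or tied to NeuroGrid or D-Wave; the kernel, names and sizes are purely illustrative). The developer describes what a single thread should do, and the framework maps that work across thousands of GPU cores without the programmer hand-managing warps, caches or memory controllers:

```cuda
// Illustrative-only sketch: add two large vectors on a GPU via CUDA.
#include <cstdio>
#include <cuda_runtime.h>

// Each thread handles one element; no knowledge of the GPU's scheduling
// or memory hierarchy is needed to get something working.
__global__ void vectorAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;            // 1M elements (arbitrary size)
    const size_t bytes = n * sizeof(float);

    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);     // unified memory hides explicit host/device copies
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);

    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threadsPerBlock = 256;
    const int blocks = (n + threadsPerBlock - 1) / threadsPerBlock;
    vectorAdd<<<blocks, threadsPerBlock>>>(a, b, c, n);
    cudaDeviceSynchronize();          // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);      // expect 3.0

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The bet behind NeuroGrid’s compilers and D-Wave’s APIs is similar: hide the exotic hardware behind a programming model that looks familiar enough for ordinary developers to pick up.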

Cheap computers that perform complex calculations at the speed of thought, or that offload part of a task to a cloud-based system that can, could make the internet of things a lot more compelling by making the devices a lot more useful. They could make self-driving cars even better than they already are. They could make computer vision, speech recognition and other applications of machine learning part of our day-to-day interactions with more than just our smartphones’ personal assistant apps.

All the better if we don’t have to worry about charging something every few hours. In some cases, such as medical devices, energy efficiency will be a critical element.

But without a deep pool of programming know-how, innovation in these areas will be confined to the imaginations of the relatively few people who know how to optimize software for these new types of computers. Or worse, it could be that really powerful new technologies never get off the ground commercially because they just can’t attract the developer bases they require. The supercomputing world is already experiencing this scenario to some degree, as systems makers rush to build exascale systems composed of so many cores that pretty much nobody can write applications for them.

It’s not a crisis just yet — with the exception of GPUs, few if any quantum computers or brain-like architectures are being sold commercially — but it’s something worth thinking about to avoid putting the cart before the horse when it comes to the future of robotics, wearable technology and smart everything. They’re great in theory, but fully realizing their potential might mean fully embracing some new foundations for powering them.

Feature image courtesy of Shutterstock user Sebastian Kau.

  1. Thanks for the coverage – I’ve updated my post here to include Stanford (previously was only a mention at the end as they hadn’t released much publicly for a while) – http://hardyproduct.blogspot.sg/2014/04/advances-in-cognitive-computing-hardware.html

    1. Derrick Harris Monday, April 28, 2014

      Informative post. I hadn’t heard about IBM’s “corelet programming” method. All those projects also beg the question of which approach will win out, or how many variants the market will support.

  2. Reblogged this on Carpet Bomberz Inc. and commented:
    Neural networks on the move now that multiple CPU cores are possible on a much smaller PC card. I’ll be keeping an eye on “NeuroGrid”.

  3. Reblogged this on Paul Jacobson's blog and commented:
    This must change as these computers become cheaper to produce and are demonstrably more powerful and energy efficient. The coding skills will come as these new computers find everyday uses.

  4. Reblogged this on WordOfStu and commented:
    It looks like we’re nearly at the cockroach level; I’m really excited about where this kind of “synapse” density and simulation can go.

  5. It’s always been my belief that neural-net based computers shouldn’t be programmed; they should be taught, just like humans, since the structure of these devices will be very similar to that of the brain.

    1. I agree with this statement. With an ANN, the quantity of data fed into it and the subsequent tuning of the weights between neurons by learning algorithms determine the quality of the model. So in most cases, more data means a better model.

  6. Ian Elliott Thursday, May 1, 2014

    We are close to the point when computers will begin programming themselves in increasingly complex ways which will lock us out of the process eventually. We are building our own supplanters.
