
Summary:

What happens when you place the equivalent of 1,024 neurons in parallel on a chip? You get a new approach to computing for cloud services and sensor networks, toys that can recognize cue cards, and better artificial intelligence and pattern recognition.


Imagine a chip that works like our brains and could do everything from making our cloud services better and giving sensor networks smarts to letting toys interact with kids and giving cameras the power to track, count and spot people in a crowded mall. Meet the CM1K, the silicon marketed by Folsom, Calif.-based startup CogniMem, which aims to give artificial intelligence, sensor networks and low-power predictive computing a big boost.

CogniMem, a blend of "cognitive memory," was created this year to market the CM1K and sells it in units of 1,000 at $80 a chip. The chip is based on technology licensed from IBM. It's not the cheapest chunk of silicon out there, nor is it really ready for most high-end applications. But the promise is that its approach of building chips that handle massively parallel memory functions could lead to breakthroughs in the way computing is done, making it greener and more efficient for certain kinds of problems.

Much like IBM, CogniMem is trying to build chips that model the human brain, and has a license from IBM to make its products. "Based on multiple generations of IBM-patented ZISC technology, we have perfected this approach for practical commercial use, providing unmatched performance at low power, and made it available now," Bruce McCormick, co-founder, president and CEO of CogniMem, said in the press release.

The CM1K contains 1,024 neurons, which can be daisy-chained together to build giant systems. The company says the chips are good for massively parallel jobs such as finding the closest vectors in video searching, real-time surveillance and analytics, data mining, fingerprint matching, hyperspectral image analysis, financial services, weather forecasting, and a wide range of scientific computational tasks.
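The neurons work as pattern-matching memory: each one stores a reference vector and, when an input is broadcast, reports how close that input comes to its stored pattern. A minimal sketch of that behavior (a conceptual model, not the CM1K's actual API; the class and function names here are my own):

```python
# Conceptual sketch (not the CM1K API): each "neuron" stores a prototype
# vector, and on hardware all neurons compute their distance to the
# broadcast input simultaneously. Here we simulate that with a loop.

def l1_distance(a, b):
    """Manhattan distance: the kind of cheap metric suited to parallel hardware."""
    return sum(abs(x - y) for x, y in zip(a, b))

class Neuron:
    def __init__(self, prototype, category):
        self.prototype = prototype  # the stored reference pattern
        self.category = category    # label assigned at training time

def recognize(neurons, pattern):
    """Winner-take-all: return the category of the closest stored prototype."""
    best = min(neurons, key=lambda n: l1_distance(n.prototype, pattern))
    return best.category

# "Train" three neurons on toy 4-byte patterns, then classify a noisy input.
neurons = [
    Neuron([0, 0, 0, 0], "dark"),
    Neuron([255, 255, 255, 255], "bright"),
    Neuron([128, 128, 128, 128], "gray"),
]
print(recognize(neurons, [120, 130, 125, 131]))  # prints "gray"
```

Daisy-chaining more chips, in this model, just means a longer list of neurons; the broadcast-and-compare step stays the same.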

Here’s where it gets technical

The reason this chip is so potentially exciting is a bottleneck in conventional computing: getting information from memory to the processor. Processors run so fast that they often idle while waiting for data from memory, and as you add more cores, the process can bog down even more. But because each of the CM1K's neurons functions independently as both the holder of information and a processor, it doesn't have this problem, so it can take certain kinds of jobs and perform them much faster and generally at lower power. The CM1K draws about half a watt.

Late last week, CogniMem announced CogniBlox, a system for grouping its chips. From the release:

It’s composed of four CM1K chips or a total of 4,096 cognitive memory processing elements per board in a trainable 3-layer network, each having 256 programmable 1-byte connections to the input. Systems of 1 million elements can be configured allowing for 256 million connections every 10 microseconds with a typical power consumption of 500 watts and 0.13 petaops of performance.
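The quoted figures can be sanity-checked with a little arithmetic (the calculation below just multiplies the release's own numbers; the closing comparison to the petaops claim is my reading, not CogniMem's):

```python
# Multiply out the figures from the release quoted above.
elements = 1_000_000   # "systems of 1 million elements"
connections = 256      # 1-byte connections per element
cycle = 10e-6          # seconds per evaluation ("every 10 microseconds")

updates_per_sec = elements * connections / cycle
print(f"{updates_per_sec:.2e} connection evaluations per second")  # 2.56e+13
```

That works out to roughly 2.56 x 10^13 connection evaluations per second, so the release's 0.13-petaops figure presumably counts several primitive operations per connection evaluation.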

From the point of view of the average consumer, this type of advance in silicon may not mean much, but it opens up an entirely new world of computing that relies on sensors and compute everywhere. Right now we are tied to our machines, but as we evolve new ways of helping chips process information in more distributed, lower-power systems, we can create entirely new applications and move a step closer to taking the computers out of computing.
