
What Is Neuromorphic Hardware?

In 1965, Gordon Moore, who would go on to co-found Intel and serve as its CEO, observed that the number of transistors in an integrated circuit was doubling at a steady pace while the cost per transistor fell, and predicted the trend would continue; in 1975 he revised the doubling time to roughly every two years, and that projection has largely held true since. To borrow an example I read some time ago, if automobile technology had improved at the same rate, a Rolls-Royce would get 2,000 miles to the gallon, and it would be cheaper to throw it away than to park it. But now it looks like Moore's Law may be in its final days (or years): it is simply becoming impossible to cram ever more transistors into a chip.

But with the rapid advancement and widespread adoption of AI in just about every nook and corner of computing, the demand for even faster hardware keeps growing, and AI algorithms tend to consume a lot of computational resources. Cloud computing generally fills this gap, but cloud computing for AI has its drawbacks.

Recent trends suggest a shift towards edge computing for AI, driven primarily by privacy concerns around consumer devices. Nobody likes their private conversations being sent to distant servers for analysis. With its latest Pixel devices, Google made a point of the fact that transcription in the Recorder app happens on the device itself, using a trimmed-down neural network.

Edge computing also reduces latency as well as the costs associated with cloud computing, so at least some of the less complex AI tasks are carried out on the device itself. Smart home assistants like the Echo or Google Home are good examples: wake-word detection ("Hey Alexa" or "OK Google") is done on the device, but the rest of the conversation is analyzed in the cloud, as the sketch below illustrates.
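
As a rough illustration of that split, here is a minimal Python sketch, under my own assumptions: the helpers detect_wake_word and transcribe_in_cloud are hypothetical stand-ins, not any real assistant's API. The point is simply that the cheap wake-word check stays on the device, and only audio captured after the wake word ever leaves it.

```python
from typing import Iterable, List

def detect_wake_word(frame: bytes) -> bool:
    """Placeholder for a tiny on-device keyword-spotting model."""
    return False  # stub for the sketch

def transcribe_in_cloud(frames: List[bytes]) -> str:
    """Placeholder for a call to a cloud speech-to-text service."""
    return ""  # stub for the sketch

def assistant_loop(microphone: Iterable[bytes], utterance_frames: int = 300) -> str:
    """Listen locally; after the wake word, forward one utterance to the cloud."""
    buffered: List[bytes] = []
    awake = False
    for frame in microphone:
        if not awake:
            awake = detect_wake_word(frame)      # cheap check, runs entirely on-device
        else:
            buffered.append(frame)               # only post-wake audio leaves the device
            if len(buffered) >= utterance_frames:
                return transcribe_in_cloud(buffered)
    return ""
```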

Autonomous cars are another great example of AI at the edge: the latency of a round trip to the cloud makes it impractical to drive a car from a remote server. But obviously, running AI at the edge requires some serious hardware.

These use cases are what make neuromorphic hardware relevant today. Neuromorphic hardware sits alongside quantum computing and carbon nanotubes among the technologies being explored to carry computing past the eventual death of Moore's Law.

What is neuromorphic hardware?

Neuromorphic hardware, or neuromorphic chips, are hardware components that mimic the human brain. The idea itself is not new: the term was coined by Carver Mead in the late 1980s to describe analogue and digital circuits that mimic the nervous system.

Even as computers became remarkably fast, scientists and engineers remained fascinated by the raw computing power of the human brain. The brain is remarkably small, highly energy-efficient, and able to grow new physical structures to handle new tasks.

Brain on a chip, or a chip that is a brain?

While digital neural networks have undergone a revolution this past decade, those algorithms run on conventional chips: the functioning of a human brain is only emulated in software, on ordinary transistors. Neuromorphic chips instead aim to build artificial neurons and synapses directly on silicon (or perhaps on something other than silicon) to run these algorithms. The idea is to reduce both the power consumption and the size of the computers needed to run them. Put simply, things are a lot easier when an algorithm mimicking the brain runs on hardware that mimics the brain too.

Plasticity is another property of the brain that researchers are trying to mimic in neuromorphic chips. Neurons in the brain can form new connections in order to learn new tasks. If this can be replicated in silicon, we could have chips that behave as if they were designed for a specific task, rather than as general-purpose processors. A chip that can form new physical structures or new connections would act much like the human brain.
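
To give a feel for what "strengthening connections" means in software terms, here is a purely illustrative Hebbian-style weight update in Python. The network size, learning rate, and random activity are my own assumptions for the sketch, not a description of how any particular chip implements plasticity.

```python
import numpy as np

# Illustrative Hebbian-style plasticity: connections between neurons that are
# active at the same time get strengthened ("cells that fire together wire together").
# Sizes and learning rate are arbitrary assumptions for this sketch.

rng = np.random.default_rng(0)
n_neurons = 8
weights = rng.normal(scale=0.1, size=(n_neurons, n_neurons))  # synaptic strengths

def hebbian_update(weights: np.ndarray, activity: np.ndarray, lr: float = 0.01) -> np.ndarray:
    """Strengthen the synapse between every pair of co-active neurons."""
    return weights + lr * np.outer(activity, activity)

activity = (rng.random(n_neurons) > 0.5).astype(float)  # which neurons fired this step
weights = hebbian_update(weights, activity)
```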

Spiking Neural Networks

Spiking neural networks (SNNs) are considered the third generation of neural networks and mimic biological neural networks more closely. They are expected to take full advantage of neuromorphic hardware, since they are difficult to emulate efficiently on conventional hardware.

While second-generation neural networks have made AI practical in many applications, they don't exactly mimic biological neurons. They are typically fully connected, meaning every neuron in a layer is connected to all the neurons in the layers before and after it. In an SNN, by contrast, neurons are connected only to the ones near them, and a neuron fires only when its membrane potential reaches a threshold (the activation potential).
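
That firing rule is usually modelled as a leaky integrate-and-fire neuron. The short Python sketch below (with arbitrary, illustrative constants) shows the idea: the membrane potential integrates incoming current, leaks over time, and emits a spike only when it crosses the threshold.

```python
import numpy as np

# Minimal leaky integrate-and-fire (LIF) neuron, the usual building block of an SNN.
# All constants here are illustrative, not taken from any particular chip or paper.

def lif_neuron(input_current, threshold=1.0, leak=0.9, reset=0.0):
    """Return the spike train (0/1 per time step) produced by one LIF neuron."""
    v = 0.0                      # membrane potential
    spikes = []
    for i in input_current:
        v = leak * v + i         # leak a little, then integrate the input
        if v >= threshold:       # membrane potential reached the firing threshold
            spikes.append(1)
            v = reset            # potential is reset after the spike
        else:
            spikes.append(0)
    return spikes

current = np.full(20, 0.3)       # constant input current over 20 time steps
print(lif_neuron(current))       # the neuron fires periodically, not on every step
```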

Because SNNs are only locally connected, they bring in a spatial aspect, processing parts of the input separately. They can also encode temporal information, thanks to the timing of their spikes (see the encoding sketch below). However, efficient training methods for SNNs are not yet available, and their output is difficult to interpret, so applications are currently limited; further advances should make these algorithms much more efficient.
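
One simple way to see the temporal aspect is latency coding, where a stronger input makes its neuron fire earlier. The sketch below uses made-up numbers and a window length I chose for illustration; it is one possible encoding scheme, not the only one SNNs use.

```python
import numpy as np

# Illustrative latency coding: stronger inputs spike earlier within a fixed time window.
# The window length is an arbitrary assumption; real SNN encoders vary.

def latency_encode(values: np.ndarray, window: int = 10) -> np.ndarray:
    """Map each input in [0, 1] to the time step at which its neuron spikes."""
    values = np.clip(values, 0.0, 1.0)
    # intensity 1.0 -> spike at t=0, intensity near 0 -> spike at the end of the window
    return np.round((1.0 - values) * (window - 1)).astype(int)

intensities = np.array([0.9, 0.5, 0.1])
print(latency_encode(intensities))   # e.g. [1 4 8]: the strongest input fires first
```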

Current projects

TrueNorth is a neuromorphic chip from IBM capable of 46 billion synaptic operations per second per watt. The chip has one million individually programmable neurons and 256 million synapses, yet the whole thing consumes just 70 milliwatts and fits in the palm of your hand.

Loihi is an initiative from Intel, with 130,000 neurons and 130 million synapses. The chip was fabricated on Intel's 14 nm process and was designed specifically for implementing spiking neural networks.

Advantages

Neuromorphic chips are faster and smaller than their conventional counterparts, and they consume far less power. This is partly because only the necessary neurons fire while the rest stay inactive. The human brain uses far less energy than almost any computer, and researchers hope to replicate that efficiency in neuromorphic chips, as the rough comparison below suggests.
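
A rough way to picture that saving: in an event-driven model, work is only done for neurons that actually spike, instead of multiplying a full weight matrix every step. The sketch below uses hypothetical layer sizes and spike rates of my choosing, and simply counts the operations each approach would perform.

```python
import numpy as np

# Illustrative comparison (made-up sizes): a dense layer touches every synapse on
# every step, while an event-driven update only touches synapses of neurons that
# actually spiked. This sparsity is one reason neuromorphic designs save power.

rng = np.random.default_rng(0)
n_pre, n_post = 1000, 1000
weights = rng.normal(size=(n_pre, n_post))
spikes = rng.random(n_pre) < 0.02          # ~2% of input neurons spike this step

dense_ops = n_pre * n_post                 # conventional layer: every synapse is used
event_ops = int(spikes.sum()) * n_post     # event-driven: only rows of spiking neurons

event_driven_output = weights[spikes].sum(axis=0)   # accumulate only the active rows
print(dense_ops, event_ops)                # e.g. 1000000 vs ~20000 synaptic operations
```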

Something you may have noticed about human beings is that we tend to learn from far less data. We don't need to be trained on every available font to read a text, and we can understand different accents, even ones we've never heard before, as long as we know the language. This ability to learn from smaller datasets may become possible with neuromorphic chips.

Designing neuromorphic chips requires an interdisciplinary team, combining expertise in physics, computer science, and biology. They look like the next big thing in artificial intelligence, and it won't be a surprise if neuromorphic chips become the standard way of implementing AI systems.
