Large-scale cortical brain simulations on present-day supercomputers, based on the von Neumann model of computation, have proved highly inefficient compared to the ultra-high-density, energy-efficient processing capability of the human brain. For instance, the IBM Blue Gene supercomputer consumed 1.4 MW of power to simulate 5 seconds of the brain activity of a cat, whereas the human brain consumes power on the order of a few watts. In order to harness the remarkable efficacy of the human brain in cognition and perception related tasks, the field of neuromorphic computing attempts to develop non-von Neumann computing models inspired by the functionality of the basic building blocks of the biological brain, i.e., neurons and synapses.
The computational fabric of the brain consists of a highly interconnected structure in which neurons are connected by junctions termed synapses. Each synapse is characterized by a conductance and transmits weighted signals, in the form of spikes, from one neuron (the “pre-neuron”) to another neuron (the “post-neuron”). It is now widely accepted that synapses are the main computational elements involved in learning and cognition. The theory of Hebbian learning postulates that the strength of a synapse is modulated in accordance with the temporal relationship between the spiking patterns of the pre-neuron and the post-neuron. In particular, Spike-Timing Dependent Plasticity (STDP) has emerged as one of the most popular forms of Hebbian learning. According to STDP, if the pre-neuron spikes before the post-neuron, the conductance of the synapse potentiates (increases), while it depresses (decreases) if the pre-neuron spikes after the post-neuron. The relative change in synaptic strength decreases exponentially with the timing difference between the pre-neuron and post-neuron spikes. The timing window during which such plastic synaptic learning occurs has been observed to be on the order of ~100 ms.
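The exponential timing dependence described above can be sketched as a simple pair-based STDP rule. The amplitudes and decay constant below are illustrative placeholders (not values given in the text); only the qualitative shape (potentiation for pre-before-post, depression for post-before-pre, exponential decay with the timing gap) follows from the description.

```python
import math

def stdp_delta_w(t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Relative synaptic weight change for one pre/post spike pair.

    t_pre, t_post: spike times in ms. a_plus, a_minus, and tau (ms)
    are hypothetical parameters chosen for illustration.
    """
    dt = t_post - t_pre
    if dt > 0:
        # Pre-neuron spiked before post-neuron: potentiation,
        # decaying exponentially with the timing difference.
        return a_plus * math.exp(-dt / tau)
    elif dt < 0:
        # Pre-neuron spiked after post-neuron: depression.
        return -a_minus * math.exp(dt / tau)
    return 0.0

# A small timing gap produces a larger weight change than a large one:
print(stdp_delta_w(0.0, 5.0))   # potentiation, pre before post
print(stdp_delta_w(0.0, 50.0))  # weaker potentiation, larger gap
print(stdp_delta_w(10.0, 0.0))  # depression, pre after post
```

In a learning system, such a function would be evaluated for each spike pair falling within the ~100 ms plasticity window and the result accumulated into the synaptic conductance.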
Synapses also outnumber neurons in the mammalian cortex by a large margin. It is therefore crucial to accommodate as many synapses as possible per neuron for efficient implementation of a neuromorphic system capable of online learning. Although there have been several attempts to emulate synaptic functionality with CMOS transistors, the area overhead and power consumption involved are quite large due to the significant mismatch between CMOS transistors and the underlying neuroscience mechanisms. As a result, nanoscale devices that emulate the functionality of such programmable, plastic, Hebbian synapses have become a crucial requirement for neuromorphic computing platforms. To that end, researchers have proposed several programmable devices based on phase-change materials, Ag-Si memristors, and chalcogenide memristors that mimic synaptic functionality. Neuromorphic computing architectures employing such memristive devices have also been demonstrated. However, nanoscale devices attaining the ultra-high density (10^11 synapses per cm^2) and low energy consumption (~1 pJ per synaptic event) of biological synapses have remained elusive. Therefore, improvements are needed in the field.