Engineering truly intelligent machines is one of the most exciting and challenging frontiers of modern hardware design. The field of machine learning (ML) has achieved remarkable progress in the mathematical formulation of performance bounds and algorithms for many classes of problems, such as pattern recognition, natural language processing, clustering, and time series prediction.
Recently, ML algorithms have become increasingly involved in many aspects of modern life, particularly in mobile devices and cloud computing. This generates a growing demand for intensive ML calculations to be performed quickly, with low area and low power consumption. New methodologies must therefore be developed to meet these demands.
Many ML algorithms require updating large matrices of values termed synaptic weights. The power of ML stems from the rules used for updating these weights. The update rules are usually local, in the sense that they depend only on information available at the site of the updated synapse, and the update procedure occurs continuously during the normal operation of the system. Such rules go under the general name of Hebbian, after the physiologist Donald Hebb, who first suggested this distributed mechanism underlying neural function.
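As an illustration of locality, a minimal sketch of one classic Hebbian rule (the function name, dimensions, and learning rate are illustrative assumptions, not taken from the text): each weight is updated using only its own pre-synaptic input and post-synaptic output.

```python
import numpy as np

def hebbian_update(W, x, eta=0.01):
    """Sketch of a local Hebbian update on the synaptic weight matrix W.

    Each entry W[i, j] changes based only on information available at
    that synapse: its pre-synaptic input x[j] and post-synaptic output
    y[i]. No global error signal is needed.
    """
    y = W @ x                   # post-synaptic activations
    W += eta * np.outer(y, x)   # local rule: dW[i, j] = eta * y[i] * x[j]
    return W

# Toy example: 3 output neurons, 4 inputs, uniform initial weights.
W = np.full((3, 4), 0.5)
x = np.array([1.0, 0.0, 1.0, 0.0])
W = hebbian_update(W, x)
# Only the weights whose input was active (columns 0 and 2) are strengthened.
```

Because every update touches only quantities local to one synapse, such rules map naturally onto a physical memory array in which each cell updates itself in place.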
Conventional hardware implementations of ML algorithms store these matrices in standard digital memory arrays, physically separated from the computing circuits. This architecture severely limits the usability and scalability of ML hardware and provides little advantage over software implementations on general-purpose hardware.
There is a growing need for hardware systems that are compact, can execute general ML algorithms, and are not limited to narrow, special-purpose learning rules such as Spike-Timing-Dependent Plasticity (STDP).