Technological Field
The present disclosure relates to, inter alia, machine learning and training of artificial neural networks.
Background
Spiking neural networks may be utilized in a variety of applications such as, for example, image processing, object recognition, classification, robotics, and/or other applications. Such networks may comprise multiple nodes (e.g., units, neurons) interconnected with one another via, e.g., synapses (doublets, connections).
As used herein, “back propagation” is used without limitation as an abbreviation for “backward propagation of errors,” a method commonly used for training artificial neural networks. As a brief aside, back propagation is characterized as a supervised learning method, and is a generalization of the so-called “delta rule,” which is a form of gradient descent learning characterized by manipulation of weights within an array (or “network layer”) consisting of a single perceptron layer. The classical perceptron generates a binary output of one (1) when the sum of an input value scaled by a weight (w) and a bias (b) exceeds zero (0); otherwise the perceptron generates a binary output of zero (0).
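The classical perceptron rule described above may be sketched as follows. This is an illustrative sketch only; the function name and the variables (an input vector x, weight vector w, and bias b) are assumptions chosen for clarity rather than notation from the present disclosure.

```python
import numpy as np

def perceptron(x, w, b):
    """Classical perceptron: output one (1) when the weighted input
    plus the bias exceeds zero, otherwise output zero (0)."""
    return 1 if np.dot(w, x) + b > 0 else 0

# Illustrative usage: a two-input perceptron configured as a logical
# AND gate (weights and bias chosen by hand for this example).
w = np.array([1.0, 1.0])
b = -1.5
print(perceptron(np.array([1.0, 1.0]), w, b))  # -> 1
print(perceptron(np.array([1.0, 0.0]), w, b))  # -> 0
```

Note that the output is a hard binary decision; it is this non-differentiable thresholding that the delta rule generalizes away from when extending gradient descent learning to multi-layer networks.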
Using a target (desired) output and sensory input, an artificial neural network composed of multiple sequentially coupled network layers may be configured to, e.g., identify an object (e.g., a specific dog) from a series of images (e.g., pictures of many different dogs). Back propagation algorithms communicate real-valued error values from one layer of the network to a prior network layer. It may be of benefit to train spiking neural networks using back propagation methodologies.
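The propagation of real-valued error values from one layer to a prior layer may be sketched as below. This is a minimal illustrative example, not the method of the present disclosure: it assumes a two-layer network with sigmoid units, a hand-picked logical-OR training task, and illustrative names (W1, W2, deltas dY and dH) not drawn from the source.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mse(Y, T):
    return float(np.mean((Y - T) ** 2))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)  # input -> hidden
W2 = rng.normal(scale=0.5, size=(1, 2)); b2 = np.zeros(1)  # hidden -> output

# Sensory inputs and target (desired) outputs: a logical-OR task,
# chosen purely for illustration.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [1.]])

def forward(X):
    H = sigmoid(X @ W1.T + b1)   # activations of the prior (hidden) layer
    Y = sigmoid(H @ W2.T + b2)   # network output
    return H, Y

initial_loss = mse(forward(X)[1], T)

lr = 1.0
for _ in range(2000):
    H, Y = forward(X)
    # Real-valued error at the output layer (generalized delta rule).
    dY = (Y - T) * Y * (1 - Y)
    # Error communicated back to the prior layer through the weights.
    dH = (dY @ W2) * H * (1 - H)
    # Gradient descent updates of weights and biases.
    W2 -= lr * (dY.T @ H) / len(X); b2 -= lr * dY.mean(axis=0)
    W1 -= lr * (dH.T @ X) / len(X); b1 -= lr * dH.mean(axis=0)

final_loss = mse(forward(X)[1], T)
```

The backward pass here relies on real-valued, differentiable activations; the binary (spiking) outputs of a spiking neural network lack this property, which is one reason training such networks with back propagation is non-trivial.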