This invention relates generally to network simulation, and more particularly to a system for address-event-representation network simulation.
Network simulations can be used for various modeling applications. Artificial neural networks are one example of a type of network simulation, in which simple nodes, or “neurons,” are connected together to form a network that can exhibit complex global behavior. The nodes of a neural network typically operate collectively and in parallel.
One type of neural network that increases the level of biological realism of neural simulation, and is also advantageous for hardware implementation, is a spiking neural network (SNN). An SNN can include many processing nodes and interconnections, which in general have specified time delays and modifiable weights. A “spike,” or pulse, characterized solely by its time of emission by a source node, is received by a target node and effects changes in the internal state of the target node and/or the weight of the interconnection. The target node may in turn emit a spike in response to the received spike. The effect of the received spike depends on the weight of the connection along which it arrives and on the recent history of spikes received by the target node. The SNN may adapt over time to perform a desired neural-network function, such as pattern recognition, function approximation, prediction, or control.
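The event-driven spike delivery described above can be sketched as follows. This is an illustrative simplification, not the claimed system: the threshold-and-reset node model, the `Node` and `simulate` names, and the connection representation are assumptions introduced only to show how timed, weighted spike events propagate between nodes.

```python
import heapq

class Node:
    """A simplified spiking node: integrates weighted input and fires
    when its internal state crosses a threshold (an assumed model)."""
    def __init__(self, threshold=1.0):
        self.potential = 0.0
        self.threshold = threshold

    def receive(self, weight):
        """Integrate an incoming spike; return True if the node fires."""
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0   # reset internal state after emitting
            return True
        return False

def simulate(nodes, connections, initial_deliveries, max_time=100.0):
    """Process spike deliveries in time order.

    connections[node_id] = [(target_id, weight, delay)]; each delivery
    is a (time, target_id, weight) tuple. Returns the list of
    (time, node_id) spike emissions, i.e., the address-event stream.
    """
    events = list(initial_deliveries)
    heapq.heapify(events)
    spikes = []
    while events:
        t, dst, w = heapq.heappop(events)
        if t > max_time:
            break
        if nodes[dst].receive(w):
            spikes.append((t, dst))
            # Schedule delayed, weighted deliveries to each target node.
            for tgt, weight, delay in connections.get(dst, []):
                heapq.heappush(events, (t + delay, tgt, weight))
    return spikes
```

For example, with two nodes connected by a single interconnection of weight 1.0 and delay 2.0, an input spike at time 0 causes the first node to fire immediately and the second to fire at time 2.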
Neural network simulation is very slow on general-purpose computers, including those that use parallel processing. For an SNN with N nodes and KN connections, with each node emitting a spike during a fraction f of simulated time steps (sts), the spike being sent to each of its K target nodes, and requiring S computational operations for each received spike at a target node, the computational load is KNfS operations/sts. For typical values of K=100, N=1e6 (a million), f=0.01, and S=30, the resulting KNfS is 3e7 operations/sts. Running on a single processor of a general-purpose computer at 2e9 operations/sec, the network would execute only about 70 sts/sec. One run of a typical neural-net algorithm may require training (weight adaptation) on each of 7e4 input patterns, each presented 1e3 times, for a run time of 1e6 sec, or about 2 weeks.
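The load estimate above can be reproduced as a short back-of-the-envelope calculation. The figures are the typical values stated in the text; the assumption of roughly one simulated time step per pattern presentation is introduced here only to connect the sts rate to the quoted total run time.

```python
# Typical values from the load estimate above.
K = 100        # target connections per node
N = 1_000_000  # nodes
f = 0.01       # fraction of sts in which a given node emits a spike
S = 30         # operations per spike received at a target node

ops_per_sts = K * N * f * S            # 3e7 operations per simulated time step
cpu_rate = 2e9                         # operations/sec, single processor
sts_per_sec = cpu_rate / ops_per_sts   # roughly 70 sts/sec

patterns = 7e4
presentations_each = 1e3
# Assumption for illustration: about one sts per pattern presentation.
run_time_sec = patterns * presentations_each / sts_per_sec  # ~1e6 sec
```

The result, on the order of 1e6 seconds, matches the roughly two-week run time cited above and motivates hardware acceleration of SNN simulation.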