Existing neural networks are typically based on a single interpretation of Hebbian learning. This basic Hebbian concept is often stated as “neurons that fire together wire together”. The de facto interpretation is that wiring together is effected via the synapse connecting the two neurons: the strength of the connecting synapse is modified, or weighted, to reflect the importance or probability of the presynaptic neuron firing concurrently with the postsynaptic neuron, or vice versa.
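The Hebbian weight update described above can be sketched as follows. This is a minimal illustration only; the learning rate, network size, and function names are assumptions for the example, not taken from the source.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """Strengthen each synapse in proportion to coincident pre- and
    post-synaptic activity ("fire together, wire together")."""
    # Rows index postsynaptic neurons, columns index presynaptic neurons.
    return w + lr * np.outer(post, pre)

# Illustrative example: two presynaptic and two postsynaptic neurons.
w = np.zeros((2, 2))
pre = np.array([1.0, 0.0])   # only the first input neuron fires
post = np.array([1.0, 1.0])  # both output neurons fire
w = hebbian_update(w, pre, post)
# Only the synapses from the active presynaptic neuron are strengthened.
```

Note that this rule strengthens a connection purely on coincident firing; it encodes no information about which neuron fired first, a limitation the shortcomings list below returns to.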
Using this concept, neural networks have been developed that associate a number of input neurons with a number of output neurons via synapses. The input neurons define the input states, and the output neurons define the desired output states.
Thus nearly all existing neural networks are based on the concept of three layers: an input neuron layer, a hidden neuron layer, and an output neuron layer. FIG. 1 and FIG. 2 are illustrations of existing neural networks.
Training of such neural networks is accomplished, in its most basic form, by applying a specific input state to all the input neurons, selecting a specific output neuron to represent that input state, and adjusting the synaptic strengths or weights in the hidden layer. That is, training is conducted assuming knowledge of the desired output. After training has been completed, the application of different input states will result in different output neurons being activated with different levels of confidence. Thus recognition of an input event depends on how closely the original training states match the current input state.
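For illustration, the supervised training procedure just described can be sketched as a toy three-layer network trained by backpropagation. All sizes, the learning rate, and the iteration count here are arbitrary assumptions for the example, not figures from the source.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy network: 4 input neurons, 3 hidden neurons, 2 output neurons.
W1 = rng.normal(0, 0.5, (3, 4))
W2 = rng.normal(0, 0.5, (2, 3))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x):
    h = sigmoid(W1 @ x)        # hidden-layer activations
    return h, sigmoid(W2 @ h)  # output-neuron activations

# One input state paired with its predetermined output neuron (neuron 0).
x = np.array([1.0, 0.0, 1.0, 0.0])
target = np.array([1.0, 0.0])
lr = 0.5
for _ in range(500):
    h, y = forward(x)
    # Backpropagate the output error and adjust the synaptic weights.
    delta_out = (y - target) * y * (1 - y)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)

_, y = forward(x)
# After repeated training, the selected output neuron fires with high
# confidence for the trained input state.
```

Note that the desired output must be known before training begins, which is exactly the prior-assumption limitation discussed above; the network only learns the mapping it is told to learn.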
Such neural networks typically require extensive, repetitive training with hundreds or thousands of different input states, depending on the number of desired output neurons and the accuracy of the desired result. This results in practical networks on the order of only 10,000 input and output neurons, with as many as 10 million interconnecting synapses or weights representing synapses (current neural networks are very small compared to the capacity of the human brain, which has on the order of 10¹² neurons and 10¹⁶ synaptic connections).
Furthermore, existing networks are trained on the basis of generating predefined output neurons, and can subsequently recognize only inputs that closely resemble the training sets used for input. Existing neural networks are not capable of independent learning, as they are trained using prior assumptions: the desired goals are represented by the output neurons. Existing neural networks are not capable of expressing or recollecting an input state based on the stimulus of any output neuron in the output layer.
Existing neural networks are trained by applying independent input states to the network, where the order of training is typically insignificant: on completion of extensive, repetitive training, the output neurons are not significantly dependent on the order in which input states were applied. Such networks produce outputs based entirely on the current input state; the order in which input states are applied has no bearing on the network's ability to recognize them.
Existing neural networks may have some or all of the following shortcomings:
1. they require prior training, based on predetermined or desired output goals; they do not learn;
2. they can only recognize input states (objects) similar to the input states for which they have been trained;
3. they are computationally intensive, and therefore slow;
4. they are computationally restricted to represent only a relatively small number of neurons;
5. they need retraining if they are to recognize different objects;
6. they cannot express or recall an input object by applying a stimulus to the output neurons;
7. they are based on concurrent stimuli of all input neurons;
8. they are not creative, and they cannot express or recollect events; they can only identify/recognize events for which they have been trained;
9. they assume that neurons which fire concurrently or in quick succession are linked synaptically, but do not distinguish one from the other or the order of neuron firing; and
10. each hidden-layer neuron can receive inputs from multiple input neurons concurrently.