The present invention pertains to an optical interconnector and an optical learning system or network incorporating the interconnector therein. More particularly, the present invention pertains to an optical interconnector including an interconnection beam projection means which projects a distribution of light beams to interconnect a detecting means with an input means, and a weighting means, located in an optical path between the projection means and either the input or the detecting means, which controls a parameter of each of the interconnecting beams to set the interconnection strength or weight thereof. The optical learning system according to the invention incorporates the interconnector to provide a highly interconnected neural network having learning capability.
The present invention relates to an optical interconnector and an optical learning system incorporating such an interconnector to provide what has become known in the art as a "learning neural network". By way of background, generally a neural network comprises devices that simulate the responses of biological neurons. A simplistic model for a neuron N.sub.0 is shown in FIG. 1 to receive three inputs X.sub.1, X.sub.2, and X.sub.3 at a device which sums the inputs according to the simple equation S=X.sub.1 +X.sub.2 +X.sub.3. Positive X's may be defined as excitatory and tend to make the model neuron "fire", that is, provide a nonzero output. Negative X's, defined as inhibitory, tend to prevent the model neuron from firing. A nonlinear operator changes the output signal S from the summing device into a new signal according to a nonlinear, threshold response curve. A low input signal to the nonlinear operator, that is, a signal below some threshold S.sub.0, results in a zero output at the nonlinear operator. A high input signal gives a fixed maximal output. An intermediate input results in an intermediate output.
Output S' from the nonlinear operator is applied to still other neurons after multiplication by a weighting factor W by a distributor. All signals W.sub.1 S', W.sub.2 S', and W.sub.3 S' are proportional to S'. The weighting factor W controls the strength or weight of the connections between neuron N.sub.0 and the summing elements of the three other neurons N.sub.1, N.sub.2, and N.sub.3 shown in FIG. 1. If, for example, the weighting factor W.sub.1 is small, the interconnection between neurons N.sub.0 and N.sub.1 is said to be weak and thus the signal W.sub.1 S' transmitted from neuron N.sub.0 to neuron N.sub.1 is attenuated. Conversely, if the weighting factor is large, the signal is amplified and the interconnection between the neurons is said to be strong. It is seen that the information, memory, and problem solving methods characteristic of a neural network are determined by the interconnections in the network, that is, what is interconnected to what and with what strength.
Turning to the prior art, one recognizes U.S. Pat. No. 4,660,166 to Hopfield as disclosing an earlier network which electronically simulates neural activity to provide a system capable of retrieving particular information from a system memory in response to an interrogation thereof. The patentee describes such a retrieval system as an associative memory, that is a memory that provides an output which is particularly associated with a particular input applied to the system. In the Hopfield device an interconnected network of electronic amplifiers provides the "neurons".
To provide a neural network with capability to learn, the interconnection strengths between the "neurons" in the network must be modifiable. The interconnections must be modifiable so that the network provides a desired output when presented with an input. To provide modifiable interconnections in an electrical network greatly complicates the overall electronics. Moreover, electronic implementations appear to be inherently limited in the number of interconnections that can be made, irrespective of how difficult it is to change the strengths of the interconnections. It is believed unlikely that an electronic circuit providing for more than about 1,000,000, i.e. 1.times.10.sup.6, interconnections is feasible as cross-talk problems and problems with power requirements become overwhelming. In view of the shortcomings of electronic implementations, optical learning networks have been developed. Such a network is described in an article by K. Wagner and D. Psaltis which discloses a learning network utilizing a volume hologram comprising photorefractive crystals to interconnect nonlinear optical devices known in the art as Fabry-Perot etalons. The interconnections are both made and weighted by the interference patterns in the photorefractive crystal. Learning commences with the presentation of an optical input to the network. The network initially will display an output which greatly differs from the desired output. To correct the variance between the actual and desired outputs, i.e. to induce the network to "learn", error signals are determined by taking the difference between the actual and desired outputs. The error signals are sent backwards through the photorefractive material as light rays to modify the interference patterns and thus the weight of the interconnections made by the hologram.
Such modification is executed continuously until the generated output matches or very nearly matches the desired output, whereafter the error signals are no longer permitted to propagate through the network.
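The error-driven weight modification described above can be sketched as a simple numerical analogue. This Python fragment is not the patent's optical method: it replaces the holographic interference-pattern update with a delta-rule correction on a single linear interconnection layer, and all names, the learning rate, and the stopping tolerance are assumptions made for illustration.

```python
# Hypothetical numerical analogue of the learning procedure: the difference
# between the actual and desired outputs (the error signal) repeatedly
# adjusts the interconnection weights until the output nearly matches the
# target, after which no further corrections are applied.

def train_weights(x, target, weights, rate=0.5, tol=1e-3, max_steps=1000):
    """Single linear interconnection layer: output = sum(w_i * x_i).
    Weights are corrected by the error signal until it falls below tol."""
    for _ in range(max_steps):
        actual = sum(w * xi for w, xi in zip(weights, x))
        error = target - actual          # difference of actual and desired output
        if abs(error) < tol:             # learning stops: error no longer propagated
            break
        # backward correction: each weight moves in proportion to the error
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
    return weights

w = train_weights([1.0, 0.5], target=2.0, weights=[0.0, 0.0])
```

With these illustrative values the error shrinks by a constant factor each pass, so the loop converges to weights whose output matches the target within the tolerance.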
The interconnection capacity of photorefractive material holograms also is inherently limited. The density of the interference patterns in the photorefractive material increases in proportion to the number of interconnections made by the hologram. As this density is increased, it becomes increasingly difficult to modify the weight of some interconnections without undesirably changing the strengths of other interconnections. These cross-talk problems thereby inherently limit the number of modifiable interconnections which can be made in a neural learning network incorporating such holograms.