An all-optical, continuous-time, recurrent neural network is disclosed that is capable of executing a broad class of energy-minimizing neural-network algorithms.
Real-time (one-millisecond) solution of certain high-dimensionality information-processing problems (involving between 10⁴ and 10⁶ parallel input channels of information) would require Von Neumann (serial) computers with throughputs of between 10¹² and 10¹⁵ arithmetic operations per second. Because no such Von Neumann computers are currently available, neural networks have been proposed to solve these problems. High-dimensionality problems such as image understanding (involving feature extraction), multispectral sensor fusion (involving associative recall), and path planning (involving constrained optimization) are, therefore, candidates for solution by neural networks.
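The throughput figures above follow from the connectivity of a fully interconnected network: one update pass over N channels costs on the order of N² multiply-accumulate operations, and every pass must complete within the one-millisecond deadline. A minimal sketch of that arithmetic (the single-pass assumption for 10⁶ channels and the ten-pass assumption for 10⁴ channels are illustrative choices, not stated in the source):

```python
def required_ops_per_sec(n_channels, passes=1, deadline_s=1e-3):
    """Serial throughput needed to emulate a fully interconnected
    network: ~n_channels**2 multiply-accumulates per update pass,
    with all passes finished within the real-time deadline."""
    return n_channels ** 2 * passes / deadline_s

# 10^6 channels, one pass in 1 ms   -> 1e15 operations per second
print(required_ops_per_sec(10**6))
# 10^4 channels, ~10 passes in 1 ms -> 1e12 operations per second
print(required_ops_per_sec(10**4, passes=10))
```

Under these assumptions the estimate reproduces both ends of the 10¹²-to-10¹⁵ range quoted above.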
The architectures of neural networks are based on greatly simplified models of the brain and consist of a large number of highly interconnected, nonlinear processing elements, or "neurons". Network inputs are typically noisy, incomplete, or distorted patterns of information; network outputs are decisions consisting of noise-free completed patterns, pattern features, inferred associations, or very-good-to-best solutions to constrained optimization (cost minimization) problems.
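The energy-minimizing, pattern-completing behavior described above can be illustrated with a discrete Hopfield network, the simplest member of this class (the sketch below is illustrative only; the disclosed network is continuous-time and all-optical, and the stored pattern and update counts here are arbitrary choices). Each asynchronous threshold update can only lower the network energy E = -½ xᵀWx, so a noisy input relaxes toward a stored, noise-free completion:

```python
import numpy as np

def train_hebbian(patterns):
    """Hebbian outer-product interconnection matrix (no self-connections)."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / n

def energy(W, x):
    """Network energy E = -1/2 x^T W x; asynchronous updates never increase it."""
    return -0.5 * x @ W @ x

def recall(W, x, sweeps=5):
    """Asynchronous threshold updates until the pattern settles."""
    x = x.copy()
    for _ in range(sweeps):
        for i in range(len(x)):
            x[i] = 1 if W[i] @ x >= 0 else -1
    return x

# Illustrative 16-channel bipolar memory; corrupt three channels, then recall.
stored = np.array([1, -1, 1, 1, -1, -1, 1, -1, 1, 1, -1, 1, -1, -1, 1, 1])
W = train_hebbian(stored[None, :])
noisy = stored.copy()
noisy[[0, 5, 9]] *= -1
completed = recall(W, noisy)
# completed equals stored, and energy(W, completed) < energy(W, noisy)
```

The recall step is the "decision": the network descends its energy surface from the distorted input to the nearest stored minimum.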
A number of electronic, opto-electronic, and optical neural networks based on the above architectural and functional paradigms have been described in the literature. Electronic neural networks consist of "electronic" neurons and electrical interconnections fabricated on a planar substrate (e.g., silicon). Electronic neural networks are typically fast (a decision can be made on the order of a microsecond or less, depending on the size of the network) but, because of their planar geometries, can accommodate only a limited number (≤10³) of adaptively and globally interconnected neurons. Examples of electronic neural networks include those described by L. D. Jackel, H. P. Graf, and R. E. Howard in "Electronic neural network chips," Applied Optics, Vol. 26, p. 5077 (1987) and by A. P. Thakoor, A. Moopenn, J. Lambe and S. K. Khanna in "Electronic hardware implementation of neural networks," Applied Optics, Vol. 26, p. 5085 (1987).
Opto-electronic neural networks incorporate opto-electronic neurons (photodetectors, electronic processing chips, and light-emitting diodes) and free-space optical interconnections. While these networks are, in principle, capable of supporting as many as 10³ adaptively and globally interconnected neurons, they are complex, slow, and energy-inefficient. Examples of opto-electronic neural networks include those described by N. H. Farhat in "Opto-electronic analogs of self-programming neural nets: architectures and methodologies for implementing fast stochastic learning by simulated annealing," Applied Optics, Vol. 26, p. 5093 (1987); by Y. Owechko in "Opto-electronic resonator neural networks," Applied Optics, Vol. 26, p. 5104 (1987); and by A. D. Fisher, W. L. Lippincott, and J. N. Lee in "Optical implementations of associative networks with versatile adaptive learning capabilities," Applied Optics, Vol. 26, p. 5039 (1987).
Optical neural networks incorporate "optical" neurons and three-dimensional optical interconnections. These networks seek to minimize architectural complexity while, at the same time, maximizing device throughput, adaptivity, and efficiency. Examples of optical neural networks include non-resonant architectures described by D. Psaltis in "Optical realizations of neural network models," Proceedings of the International Society for Optical Engineering, Vol. 700, p. 278 (1986); by E. G. Paek and D. Psaltis in "Optical associative memory using Fourier transform holograms," Optical Engineering, Vol. 26, p. 428 (1987); and by D. Psaltis and N. Farhat in "Optical information processing based on an associative-memory model of neural nets with thresholding and feedback," Optics Letters, Vol. 10, p. 98 (1985). Network-like optical architectures capable of storing and associatively recalling orthogonal memories (optical resonator eigenmodes) have also been described by D. Z. Anderson in Neural Networks for Computing, American Institute of Physics Conference Proceedings 151, New York (1986); by D. Z. Anderson and M. C. Erie in "Resonator memories and optical novelty filters," Optical Engineering, Vol. 26, p. 434 (1987); and by M. Cohen in "Design of a new medium for volume holographic information processing," Applied Optics, Vol. 25, p. 2288 (1986). Finally, non-resonant, network-like architectures capable of content-addressable associative recall have been described by B. H. Soffer, G. J. Dunning, Y. Owechko, and E. Marom in "Associative holographic memory with feedback using phase-conjugate mirrors," Optics Letters, Vol. 11, p. 118 (1986); by A. Yariv and S-K. Kwong in "Associative memories based on message-bearing optical modes in phase-conjugate resonators," Optics Letters, Vol. 11, p. 186 (1986); and by A. Yariv, S-K. Kwong, and K. Kyuma in "Demonstration of an all-optical associative holographic memory," Applied Physics Letters, Vol. 48, p. 1114 (1986).