Programmable resistors can be used in a number of analog signal-processing applications, such as resistive ladders in digital-to-analog converters and resistor arrays in neural networks. Neural networks provide a means for solving random problems, such as those arising in sensing systems, which must "learn" the surrounding environment so that it can be categorized by experience. Neural networks solve these sensing problems by expressing the sensor outputs as multi-dimensional vectors and then "learning" the vectors by constructing a matrix through correlation methods.
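The correlation-based construction mentioned above can be sketched as a standard Hebbian (outer-product) associative memory. This is a minimal illustration, assuming a sum-of-outer-products correlation rule; the source does not specify the exact method, and all names here are illustrative.

```python
import numpy as np

# Two hypothetical sensor-output vectors to be "learned".
patterns = [
    np.array([1.0, -1.0, 1.0, -1.0]),
    np.array([1.0, 1.0, -1.0, -1.0]),
]

# Correlation (outer-product) matrix: W = sum_k v_k v_k^T.
# Each entry W[i, j] accumulates how strongly components i and j
# co-occur across the stored patterns.
W = sum(np.outer(v, v) for v in patterns)

# Associative recall: multiplying a stored vector by W and taking
# the sign recovers that vector (exactly so when the stored
# patterns are mutually orthogonal, as these two are).
recalled = np.sign(W @ patterns[0])
```

In a hardware realization, the entries of `W` would be held as resistor conductances, which is why the weighting elements must be reprogrammable as new patterns are learned.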
Neural networks use arrays formed by rows and columns of weighting elements, represented by resistors, to compute matrix-vector products of the voltages supplied by corresponding sensors using Ohm's law. Operational amplifiers sum the currents that result from an input voltage dropping across the resistors in each row. The current output from each row represents the inner product for one component of the corresponding output vector. By storing data in terms of resistor conductance values, an environment can be "learned" and later retrieved by associative recall. The resistors must therefore change with system experience in order to "learn" an environment. Further, neural network systems can optimize the learned patterns of various environments by varying the resistive elements in the network. In general, when used in applications such as adaptive neural networks, the programmable resistors must withstand a large number of programming cycles as data is "learned".
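The row-summing operation described above amounts to a matrix-vector product: each resistor at row j, column i contributes a current G[j, i] · V[i] by Ohm's law, and the op-amp on row j sums those contributions. A minimal numerical sketch (conductance and voltage values are illustrative):

```python
import numpy as np

# Conductance of each weighting resistor, in siemens (G = 1/R).
# Rows correspond to op-amp summing nodes, columns to sensor inputs.
G = np.array([[1.0e-3, 2.0e-3, 0.5e-3],
              [0.2e-3, 1.5e-3, 1.0e-3]])

# Input voltages from the corresponding sensors, in volts.
V = np.array([0.5, 1.0, -0.5])

# Ohm's law per element (I = G * V), summed along each row by the
# op-amp: the vector of row currents is the matrix-vector product.
I = G @ V  # one summed current per row = one output-vector component
```

Each entry of `I` is the current one operational amplifier would sum for its row, so reprogramming the conductances in `G` directly reprograms the network's weights.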
Various means have been devised for providing variable weighting elements in neural networks, and each has significant disadvantages. For example, circuitry using up-down counters and decoded switches along with fixed resistors could generate the appropriate weights; however, such an approach would be limited to only a small number of weighting elements, thereby limiting the complexity and utility of the network. Electrically-erasable, electrically-programmable read-only memories (EEPROMs) provide a second option; however, the limited programming lifetime of these devices makes them impractical. Finally, dynamic random-access memories (DRAMs) have been considered, but DRAMs need refreshing after a read operation, which greatly increases the number of overhead operations required in the overall scheme of the neural network application.