Charge-coupled devices (CCDs) are used in conventional imaging circuits for converting the light incident on a pixel into an electrical signal that is proportional to the intensity of the incident light. In general, CCDs utilize a photogate to convert the incident photons into an electrical charge, and a series of electrodes to transfer the charge collected at the photogate to an output node.
Although CCDs have many strengths, including high sensitivity and a high fill factor, they also suffer from a number of weaknesses. These weaknesses include limited readout rates and a limited dynamic range; most notable among them, however, is the difficulty of integrating CCDs with CMOS-based microprocessors.
To overcome the limitations of CCD-based imaging circuits, imaging circuits based on active pixel sensor cells have been developed. In an active pixel sensor cell, a conventional photodiode is combined with a number of active transistors which, in addition to forming an electrical signal representative of the output of the photodiode, provide amplification, readout control, and reset control. Arrays of active pixel sensor cells can be used in multimedia applications requiring low-cost and high functionality to acquire high quality images at video frame rates. Because the elements of an active pixel sensor are fabricated using a CMOS process flow, the sensor may easily be integrated into more complex CMOS-based devices to produce combined sensor-signal processor devices.
FIG. 1 shows an example of a conventional CMOS active pixel sensor cell 10. As shown in the figure, cell 10 includes a photodiode 12 connected to a first intermediate node (labelled "node 1" in the figure), and a reset transistor 14 that has a drain connected to a power supply node N.sub.PS, a source connected to node 1, and a gate connected to a first input node (labelled "reset" in the figure).
Cell 10 further includes a buffer transistor 16 and a row-select transistor 18. Buffer transistor 16 has a drain connected to node N.sub.PS, a source connected to a second intermediate node (labelled "node 2" in the figure), and a gate connected to node 1. Row-select transistor 18 has a drain connected to node 2, a source connected to a third intermediate node (where the source line intersects the column data line in the figure), and a gate connected to a second input node (labelled "row select" in the figure).
The operation of cell 10 begins by briefly pulsing the gate of reset transistor 14 with a reset voltage V.sub.RESET at time t.sub.1. The reset voltage V.sub.RESET, which is equal to Vcc (typically, +5V), resets the voltage on photodiode 12 to an initial integration voltage and begins an image collection cycle.
At this point, the initial integration voltage on photodiode 12 (as measured at node 1) is defined by the equation V.sub.RESET -V.sub.T14 -V.sub.CLOCK, where V.sub.T14 represents the threshold voltage of reset transistor 14, and V.sub.CLOCK represents reset noise from the pulsed reset voltage (assumed to be constant). Similarly, the initial integration voltage as measured at node 2 is defined by the equation V.sub.RESET -V.sub.T14 -V.sub.CLOCK -V.sub.T16, where V.sub.T16 represents the threshold voltage of buffer transistor 16 (functioning as a source follower).
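These two expressions can be checked with a short numerical sketch. All of the numeric values below (supply voltage, transistor thresholds, and clock-feedthrough noise) are assumed for illustration only; they are not specified by the cell description.

```python
# Illustrative model of the initial integration voltages in cell 10 of FIG. 1.
# All numeric values are assumed examples, not values from the cell itself.

V_RESET = 5.0   # reset pulse amplitude, equal to Vcc
V_T14   = 0.7   # threshold voltage of reset transistor 14 (assumed)
V_T16   = 0.7   # threshold voltage of buffer transistor 16 (assumed)
V_CLOCK = 0.1   # reset (clock-feedthrough) noise, assumed constant

# Voltage at node 1 immediately after the reset pulse
v_node1_initial = V_RESET - V_T14 - V_CLOCK

# Voltage at node 2: node 1 shifted down by the source-follower threshold
v_node2_initial = v_node1_initial - V_T16

print(round(v_node1_initial, 3))  # 4.2
print(round(v_node2_initial, 3))  # 3.5
```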
After the reset voltage V.sub.RESET has been pulsed and the voltage on photodiode 12 (as measured at node 1) has been reset, a row-select voltage V.sub.RS is applied to the second input node (row select) at a time t.sub.2 which immediately follows the falling edge of the reset pulse V.sub.RESET. The row select voltage V.sub.RS causes the voltage on node 2, which represents the initial integration voltage of the cycle, to appear on the third intermediate node (where the source of row select transistor 18 intersects the column data line). Detection and calculation circuit 20 connected to the column data line then amplifies, digitizes, and stores the value of the initial integration voltage as it appears on the third intermediate node.
Next, from time t.sub.2, which represents the beginning of the image collection cycle, to a time t.sub.3, which represents the end of the image collection cycle, light energy, in the form of photons, strikes photodiode 12, thereby creating a number of electron-hole pairs. Photodiode 12 is designed to limit recombination between the newly formed electron-hole pairs. As a result, the photogenerated holes are attracted to the ground terminal of photodiode 12, while the photogenerated electrons are attracted to the positive terminal of photodiode 12, where each additional electron reduces the voltage on photodiode 12 (as measured at node 1). Thus, at the end of the image collection cycle, a final integration voltage will be present on photodiode 12.
At this point (time t.sub.3), the final integration voltage on photodiode 12 (as measured at node 1) is defined by the equation V.sub.RESET -V.sub.T14 -V.sub.CLOCK -V.sub.S, where V.sub.S represents the change in voltage due to the absorbed photons. Similarly, the final integration voltage as measured at node 2 is defined by the equation V.sub.RESET -V.sub.T14 -V.sub.CLOCK -V.sub.T16 -V.sub.S.
At the end of the image collection cycle (time t.sub.3), the row-select voltage V.sub.RS is again applied to the row select input node. The row select voltage V.sub.RS causes the voltage on the second intermediate node, which represents the final integration voltage of the cycle, to appear on the third intermediate node. Detection and calculation circuit 20 then amplifies and digitizes the value of the final integration voltage as it appears on the third intermediate node.
Following this, detection and calculation circuit 20 determines the number of photons that have been collected during the integration cycle by calculating the difference in voltage between the digitized final integration voltage taken at time t.sub.3 and the digitized stored initial integration voltage taken at time t.sub.2. At this point, the difference in voltage is defined by the equation (V.sub.RESET -V.sub.T14 -V.sub.CLOCK -V.sub.T16)-(V.sub.RESET -V.sub.T14 -V.sub.CLOCK -V.sub.T16 -V.sub.S), thereby yielding the value V.sub.S.
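The cancellation of the common terms in this difference can be sketched as follows; the numeric values, including the photo-induced voltage change V.sub.S, are assumed for illustration only.

```python
# Sketch of the difference calculation performed by detection and
# calculation circuit 20: subtracting the two node-2 samples cancels
# V_RESET, V_T14, V_CLOCK, and V_T16, leaving only V_S.
# All numeric values are assumed for illustration.

V_RESET = 5.0
V_T14   = 0.7
V_T16   = 0.7
V_CLOCK = 0.1
V_S     = 0.8   # assumed voltage change due to absorbed photons

initial = V_RESET - V_T14 - V_CLOCK - V_T16         # sample at time t2
final   = V_RESET - V_T14 - V_CLOCK - V_T16 - V_S   # sample at time t3

difference = initial - final
print(round(difference, 6))  # 0.8 -- the common terms cancel, leaving V_S
```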
Once the final integration voltage has been digitized by the detection and calculation circuit, the reset voltage V.sub.RESET is again applied to the first input node at time t.sub.4, which immediately follows the rising edge of the row select voltage V.sub.RS at time t.sub.3. The reset voltage V.sub.RESET again resets the voltage on photodiode 12 to begin another image collection cycle.
Image processing is normally performed after the image is captured, converted to a digital format, moved to a main memory, and operated upon by the processing unit, where detection and calculation circuit 20 may be part of the processing unit. Each of these operations requires the consumption of power and can limit the maximum throughput rate for video signals, since the data processing is significantly delayed from the time at which the data is collected. These factors are important for portable imaging applications, which represent a primary area of growth for active pixel sensors at the present time.
FIG. 2 shows a model for a human neuron 100. Neuron 100 has as inputs a set of signals, V.sub.i 112. Neuron 100 performs two primary functions: (1) a weighted summing of the product of input signals 112 and a set of weights W.sub.i 114, producing the expression .SIGMA.(V.sub.i W.sub.i), to which is added a bias value (b) 116; and (2) a thresholding of the summed value .SIGMA.(V.sub.i W.sub.i)+b by means of a sigmoid function 118. If the summed value exceeds the threshold value as determined by the sigmoid function, then the neuron "fires", i.e., produces an output signal, V.sub.o 120.
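The two functions of the neuron model can be sketched in a few lines. The input values, weights, bias, and firing threshold below are assumed for illustration; FIG. 2 specifies only the structure of the computation, not these values.

```python
import math

# Minimal sketch of the neuron model of FIG. 2: a weighted sum of the
# inputs plus a bias, passed through a sigmoid, then thresholded.
# Inputs, weights, bias, and threshold are assumed for illustration.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias, threshold=0.5):
    """Return the output V_o: the neuron 'fires' (nonzero output) only
    when the sigmoid of the summed value exceeds the threshold."""
    summed = sum(v * w for v, w in zip(inputs, weights)) + bias
    activation = sigmoid(summed)
    return activation if activation > threshold else 0.0

V_i = [0.9, 0.2, 0.5]     # input signals (assumed)
W_i = [1.0, -0.5, 2.0]    # weights (assumed)
b   = 0.1                 # bias (assumed)

# summed value = 0.9 - 0.1 + 1.0 + 0.1 = 1.9; sigmoid(1.9) > 0.5
print(neuron_output(V_i, W_i, b) > 0)  # True -- the neuron fires
```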
Neural networks composed of interconnected floating gate MOSFETs which implement the human neuron model of FIG. 2 may be constructed and used to implement digital signal processing algorithms for the purpose of processing images. These functions are typically performed on digitized maps of the data gathered by an imaging array in the form of a post-data collection image processing algorithm. However, this approach has several disadvantages. Each of the operations used to prepare the data for processing after collection of the array data requires the consumption of power and can limit the maximum throughput rate. In contrast, image processing at the focal plane level (local, pixel level processing) would be faster, permitting image correction and improving image quality. Local processing would also reduce system power consumption and improve throughput.
An interesting aspect of the development of active pixels is that the smallest practical pixel size is limited by physical principles (vibration and the diffraction-limited resolution, which is a function of wavelength and aperture size), while the feature size of the active transistors continues to shrink. This is in contrast to devices such as memory cells, wherein the cell size shrinks with the feature size so that the number of transistors per cell remains fixed. This feature of active pixel sensors means that the number of transistors which may be incorporated into each active pixel will continue to increase as the smallest feature size continues to decrease. This permits the use of focal plane level image processing, wherein processing capability is incorporated at the pixel level in the form of additional transistors.
What is desired is a structure for an active pixel image cell which includes an embedded neuron transistor to permit focal plane level image processing for neural network applications. These and other advantages of the present invention will be apparent to those skilled in the art upon a reading of the Detailed Description of the Invention together with the drawings.