Various kinds of solid-state image sensors are known. Many, such as those of the charge-coupled-device type or the bucket-brigade type, require a high-quality monocrystalline semiconductor layer, such as monocrystalline silicon, for the active photosensitive layer. Because of the difficulty of making large-area layers of this kind, such devices typically use active regions of limited area and need optical systems to scale down the image to match that limited area. Sensors of this kind tend to be expensive, although they are the most sensitive.
Other forms are known which do not require monocrystalline semiconductive layers and so can readily have an active region of relatively large area. Typically, these use as the photosensitive material either compound semiconductors, such as CdS and As-Se-Te, or elemental semiconductors, such as amorphous silicon. Of these, there are two basic forms: the planar form uses spaced electrodes on the same surface of the active photosensitive layer, while the sandwich form uses a photosensitive layer sandwiched between a large-area common electrode and an array of spaced discrete electrodes.
The present invention is of the sandwich type. Typically, in the past, such a sensor has comprised a large-area electrode on one surface of a layer of amorphous silicon and an array of spaced discrete electrodes on the opposite surface of the layer. Each of the discrete electrodes corresponds to a picture element, or pixel, of the picture scene. One problem with an image sensor of this kind has been that it is not well adapted to provide sharp images. In particular, charge carriers photogenerated in a region, or picture element, of high light intensity tend to diffuse laterally in the layer and be captured by an electrode associated with a picture element of low light intensity, which results in smearing of sharp transitions in the picture scene.
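The smearing mechanism described above can be illustrated with a minimal numerical sketch (all parameters here are hypothetical and not part of the specification): lateral diffusion of photocarriers acts roughly like a low-pass spatial filter, so charge generated under a bright pixel is partly collected by the electrode of a neighbouring dark pixel, and a sharp edge in the scene becomes a gradual transition across several pixels.

```python
# Hypothetical 1-D illustration: photocarriers generated under a sharp
# bright/dark edge diffuse laterally before collection, smearing the charge
# actually captured by each discrete electrode (pixel).

def diffuse(signal, steps, alpha=0.25):
    """Apply `steps` rounds of nearest-neighbour diffusion.

    alpha <= 0.5 keeps the explicit scheme stable; reflective boundaries
    conserve the total photogenerated charge.
    """
    s = list(signal)
    for _ in range(steps):
        s = [s[i] + alpha * ((s[i - 1] if i > 0 else s[i])
                             - 2 * s[i]
                             + (s[i + 1] if i < len(s) - 1 else s[i]))
             for i in range(len(s))]
    return s

# A sharp edge in the scene: five bright pixels (1.0) next to five dark (0.0).
edge = [1.0] * 5 + [0.0] * 5

collected = diffuse(edge, steps=10)
# Without lateral diffusion, pixel 5 would collect no charge; with it, the
# electrode of that dark pixel captures carriers generated under pixel 4.
print([round(v, 3) for v in collected])
```

The gradual ramp in the printed values, compared with the abrupt 1.0-to-0.0 step of the input, is the smearing of a sharp transition that the sandwich structure discussed above suffers from.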