Most image sensing devices operate by projecting an image that is to be scanned onto an array of discrete image sensor elements (usually p-i-n diodes). The projected image is then determined by interrogating the condition of the sensor array elements. For example, FIG. 1 shows a 4×5 element section of an array 10 onto which an image having an edge 12 is projected. The term edge is used herein to mean the border between light-illuminated areas and areas under ambient conditions. It is assumed that the area of the array 10 above the edge 12 is illuminated, while the area below the edge is dark.
The twenty elements, shown as the twenty squares 14, are organized into rows A through D, and columns R through V. To scan the image, the illumination state of each of the elements is determined using matrix addressing techniques. If a particular element is sufficiently illuminated, for example the element at row A, column R, the element is sensed as being at a first state (ON). If a particular element is not sufficiently illuminated, say the element at row D, column V, that element is sensed as being in a second state (OFF). If a particular element is partially illuminated, its state depends upon how much of the element is illuminated, and the intensity of that illumination. An interrogation of all of the illustrated elements of the array 10 results in the rather coarse approximation to the image as shown in FIG. 1, with the ON state elements in white and the OFF state elements in cross-hatch. This cross-hatched representation results from a binary thresholding of the pixel (sensor element) values. An alternative prior art implementation provides a continuous value for each pixel (gray scale). In both of these prior art implementations, the edge position information within a pixel is converted to a spatial average.
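The two prior-art readouts described above can be sketched as follows. The pixel values below are hypothetical illustrations of an edge crossing the array (each value modeling a pixel's spatial average of illumination, normalized to [0, 1]); the threshold and the 8-bit quantization are assumptions, not values from this disclosure.

```python
# Hypothetical spatial-average illumination for a 4x5 array, rows A-D.
pixels = [
    [1.0, 1.0, 1.0, 0.9, 0.6],  # row A: fully lit except near the edge
    [1.0, 1.0, 0.8, 0.4, 0.1],  # row B
    [0.9, 0.6, 0.2, 0.0, 0.0],  # row C
    [0.3, 0.1, 0.0, 0.0, 0.0],  # row D: mostly dark
]

THRESHOLD = 0.5  # assumed; a real scanner's threshold depends on its optics


def binary_readout(values, threshold=THRESHOLD):
    """Binary thresholding: each pixel collapses to ON (1) or OFF (0)."""
    return [[1 if v >= threshold else 0 for v in row] for row in values]


def gray_scale_readout(values, levels=256):
    """Gray-scale readout: each pixel keeps a quantized spatial average."""
    return [[round(v * (levels - 1)) for v in row] for row in values]
```

In both cases the exact position of the edge inside a pixel is lost: a pixel that is 60% illuminated and one that is 90% illuminated both read ON after thresholding, and the gray-scale value records only the area-weighted average, not where within the pixel the edge lies.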
When using imaging scanners as described above, an increase in accuracy of the image approximation requires smaller and more numerous sensor elements. However, the difficulty of fabricating closely spaced, but isolated, sensor elements becomes prohibitive when attempting to fabricate page-width imaging devices that have very high acuity (e.g., an acuity approaching that of the human visual system).
In addition to the discrete sensor elements described above, another type of light sensitive element, called a position sensitive detector, exists. An example of a position sensitive detector is the detector 200 shown in FIG. 2. This detector outputs photogenerated analog currents 202, 204, 206, and 208, that can be used to determine the position of the centroid of the illuminating spot 210. The centroid of the light spot in the x-direction (horizontal) can be computed from the quantity (I₂₀₆ − I₂₀₈)/(I₂₀₆ + I₂₀₈), while the centroid of the light spot in the y-direction (vertical) can be computed from (I₂₀₂ − I₂₀₄)/(I₂₀₂ + I₂₀₄), where I₂₀ₓ is the current from the corresponding lateral element. At least partially because position sensitive detectors are typically large (say from about 1 cm × 1 cm to 5 cm × 5 cm), they have not been used in imaging arrays.
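The centroid formulas above translate directly into code. This is a minimal sketch; the current names follow the reference numerals of FIG. 2, and the normalization of the result to the range [−1, 1] across the detector is an assumption.

```python
def centroid(i_202, i_204, i_206, i_208):
    """Centroid of a light spot from a position sensitive detector's
    four lateral photocurrents (names follow FIG. 2).

    x = (I206 - I208) / (I206 + I208)   # horizontal
    y = (I202 - I204) / (I202 + I204)   # vertical

    Each coordinate ranges over [-1, 1]: 0 when the spot is centered,
    +/-1 when all of the current flows from one lateral contact.
    """
    x = (i_206 - i_208) / (i_206 + i_208)
    y = (i_202 - i_204) / (i_202 + i_204)
    return x, y
```

For example, equal vertical currents with i_206 three times i_208 would place the spot centered vertically but offset halfway toward the 206 contact.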
Ideally, an imaging device should be able to match the ability of the human visual system to determine edge positions, a capability known as edge acuity. Because of the difficulties in achieving high spatial resolution by increasing the pixel density, current image scanners cannot match the high edge acuity of human perception. Thus, new imaging and scanning techniques are necessary. Such new techniques would be particularly valuable if they could identify the position of an edge to within a fraction of the interpixel spacing. The ability to resolve edge positions finer than the interpixel spacing is referred to as hyperacuity.
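To illustrate what sub-interpixel edge localization means, the sketch below estimates an edge position from gray-scale pixel values by linear interpolation, assuming the pixel value varies linearly with the edge's position inside the pixel. This is an illustrative technique only, not the method of this disclosure, and the threshold value is an assumption.

```python
def subpixel_edge(values, threshold=0.5):
    """Estimate where a falling (light-to-dark) edge crosses `threshold`
    along one scan line of gray-scale pixel values.

    Returns the crossing position in pixel units, resolving fractions
    of the interpixel spacing, or None if no such edge is found.
    Assumes pixel values vary linearly with edge position in a pixel.
    """
    for i in range(len(values) - 1):
        a, b = values[i], values[i + 1]
        if a >= threshold > b:
            # Fractional offset of the crossing between pixels i and i+1.
            return i + (a - threshold) / (a - b)
    return None
```

With values [1.0, 0.8, 0.2, 0.0], the estimate falls midway between the second and third pixels (position 1.5), a resolution no binary-thresholded readout of the same row could provide.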