An optical encoder measures the position of an object, either angular or linear, by optically detecting marks on a scale that is attached to the object and moves with it. In the simplest form, the encoder measures translation by counting the number of marks that move past the encoder's optical detector.
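The mark-counting scheme just described can be sketched in a few lines. The snippet below is an illustrative model, not part of the encoder described here: it treats the detector output as a sequence of intensity samples and counts rising edges, each edge corresponding to one mark passing the detector (the threshold value is an assumption for illustration).

```python
def count_marks(samples, threshold=0.5):
    """Count marks passing the detector by counting rising edges
    in a sampled detector signal (idealized, noise-free model)."""
    count = 0
    prev = samples[0] > threshold
    for s in samples[1:]:
        cur = s > threshold
        if cur and not prev:  # dark-to-bright transition: one mark passed
            count += 1
        prev = cur
    return count
```

Note that such a counter yields only a displacement magnitude; as the passage below explains, distinguishing direction and absolute position requires additional structure on the scale.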
In a common form of such an encoder, a fixed scale and a moving scale, each bearing identical transparent markings on an opaque background, are interposed between a light source and the detector. The relative positions of the transparent markings determine how much light is transmitted through each marking, e.g., full transmission, 1/2 or 1/4 transmission, or none at all. Of course, such an encoder can measure only relative displacement with respect to a reference position, not absolute position.
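The graded transmission levels arise from the overlap of the two slit patterns. As a rough geometric sketch (an assumption for illustration; diffraction and finite source size are ignored, and the pitch and slit width are hypothetical parameters), the transmitted fraction through one marking can be modeled as the overlap of two identical periodic slits shifted by the relative displacement:

```python
def transmitted_fraction(offset, pitch, slit_width):
    """Fraction of light passing two identical slit patterns
    (slit width < pitch) shifted relative to each other by `offset`.
    Purely geometric model: overlap length divided by slit width."""
    off = offset % pitch
    direct = max(0.0, slit_width - off)            # overlap before wrap-around
    wrapped = max(0.0, off - (pitch - slit_width))  # overlap from the next period
    return (direct + wrapped) / slit_width
```

With this model, zero offset gives full transmission, a half-slit offset gives 1/2 transmission, and a fully misaligned pair transmits nothing, matching the graded levels mentioned above.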
In a conventional absolute encoder, each position is identified not by a single mark but by a unique code pattern of marks which encodes the absolute position. A change in position is sensed by detecting a change in the code bits which make up the code pattern. Some absolute encoders can derive position information at rates higher than 100 kHz.
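The passage does not name a specific code, but a Gray code is a common choice for such patterns (an assumption here, offered only as an illustration), because adjacent positions differ in exactly one code bit, so a single-bit change unambiguously signals a one-step position change:

```python
def gray_encode(position):
    """Unique code pattern for a position; neighboring positions
    differ in exactly one bit (standard binary-reflected Gray code)."""
    return position ^ (position >> 1)

def gray_decode(pattern):
    """Recover the absolute position from a detected code pattern."""
    position = 0
    while pattern:
        position ^= pattern
        pattern >>= 1
    return position
```

Because every position maps to a distinct pattern, the decoder needs no reference mark or motion history, which is the defining property of an absolute encoder.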
In an absolute encoder such as the one just described, sensitivity is limited by the size of the smallest code bit which can be recorded, which is in turn limited by physical optics to about the wavelength of the light used to record and detect the code patterns. Thus, the best sensitivity available from such an absolute encoder is somewhat less than 1 µm of translation. Such an encoder is also limited in the amount of travel it can accommodate. For instance, an encoder which uses 12-bit code patterns can encode up to 2^12 = 4,096 positions. With a sensitivity of just under 1 µm, the maximum travel which can be detected is around 4,000 µm, or four millimeters. Moreover, because the code bits themselves are detected, damage to the scale can create dead spots in which the derived position information is anomalous.
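The travel limit above follows directly from the bit count and the per-position resolution, as a quick check of the arithmetic shows (the function name and units are ours, for illustration only):

```python
def max_travel_um(code_bits, resolution_um):
    """Maximum encodable travel: number of distinct code patterns
    times the translation represented by one position step."""
    positions = 2 ** code_bits          # 12 bits -> 4,096 positions
    return positions * resolution_um    # at ~1 um/step -> ~4,096 um = ~4 mm
```

Doubling the travel at the same resolution therefore costs one additional code bit per pattern, which illustrates why such encoders trade off travel range against pattern width.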