1. Field of the Invention
The present invention relates to a focusing position detecting and automatic focusing apparatus and, more particularly, to a focusing position detecting apparatus used for focus adjustment of an optical device such as a camera or a microscope, a focusing position detecting apparatus for receiving sample image light from a microscope or the like by means of an image sensor and detecting a focusing position on the basis of a resulting scanning signal, and an automatic focusing apparatus for an optical device, e.g., a microscope, which can be used in combination with these focusing position detecting apparatuses.
2. Description of the Related Art
As a conventional automatic focusing apparatus for a microscope or the like, an apparatus disclosed in, e.g., Published Unexamined Japanese Patent Application No. 61-143710 is known. In this automatic focusing apparatus, the focusing position calculation method is fixed to a single method. Initialization is performed in accordance with the flow chart shown in FIG. 23. First, a focusing operation is performed with the fixed focusing position calculation method for normally used samples, thus positioning an objective optical system and a target object at a proper distance from each other by the automatic focusing function (step S1). In step S2, it is checked whether the focusing position of the positioned target object is optimal. If it is not the optimal focusing position for the user, offset adjustment is performed with respect to the focusing position set by the automatic focusing function (step S3). With this operation, the focusing position set by the automatic focusing function is optimized, thus completing the initialization (step S4).
Another conventional automatic focusing apparatus is disclosed in Published Examined Japanese Patent Application No. 62-32761. In this apparatus, a high-frequency component is extracted from the spatial frequency components of a target object image formed by a phase contrast optical system. A focusing position is then obtained on the basis of the relative distance between the phase contrast optical system and the target object at which this high-frequency component exhibits its maximum value, when the high-frequency component has a second peak within a predetermined distance range from this relative distance.
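The core of the approach described above can be illustrated with a minimal sketch: the high-frequency content of the image is sampled at a series of relative distances, and the focusing position is taken at the distance giving the maximum value. The function names and the first-difference measure are assumptions for illustration, not taken from the publication, and the second-peak condition is omitted for brevity.

```python
def high_freq_energy(image_row):
    """Crude high-frequency measure: energy of the first difference
    between adjacent pixels of one image line."""
    return sum((b - a) ** 2 for a, b in zip(image_row, image_row[1:]))

def focusing_position(distances, images):
    """Return the relative distance at which the high-frequency
    component of the corresponding image exhibits its maximum value."""
    energies = [high_freq_energy(img) for img in images]
    best = max(range(len(energies)), key=energies.__getitem__)
    return distances[best]
```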
In the automatic focusing apparatus disclosed in Published Unexamined Japanese Patent Application No. 61-143710, however, since the focusing position calculation method applicable to normally used samples is fixed, the focusing position set by this method becomes unstable if, for example, a special sample is to be focused. Therefore, a focusing operation with high reproducibility cannot be expected.
Furthermore, in the automatic focusing apparatus disclosed in Published Examined Japanese Patent Application No. 62-32761, since a focusing position is solely determined on the basis of the relative distance between the optical system and the target object at which the high-frequency component of the spatial frequency components exhibits the maximum value, the apparatus cannot reflect a subtle difference in optimal focusing position between users.
As a conventional focusing position detecting optical system, an optical system having the arrangement shown in FIG. 24A is known. Light-receiving element arrays S_A and S_B are arranged before and behind a focal plane F of an optical system L for forming an object image, at the same distance from the focal plane F in the direction of the optical axis of the optical system L, with each element array perpendicular to the optical axis. Output signals from these light-receiving element arrays S_A and S_B are converted into evaluation values by predetermined evaluation functions, and a focusing position is detected on the basis of the evaluation values.
FIGS. 24B and 24C respectively show relationships between evaluation values and image forming positions at different magnifications. As shown in FIGS. 24B and 24C, the focusing position is the position where the difference ΔV between an evaluation value V_A of an output signal from the light-receiving element array S_A and an evaluation value V_B of an output signal from the light-receiving element array S_B becomes 0. In the conventional apparatus, therefore, a focusing state is detected by comparing the evaluation values V_A and V_B in the following manner. If V_A > V_B, a forward-focus state is detected. If V_A < V_B, a backward-focus state is detected. If V_A = V_B, an in-focus (just-focused) state is detected.
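The comparison described above can be sketched as follows, assuming a simple contrast-type evaluation function over the two line-sensor outputs; the function names and the tolerance parameter are illustrative only, not part of the conventional apparatus.

```python
def evaluation_value(signal):
    """Contrast-type evaluation function: sum of squared differences
    between adjacent light-receiving elements."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))

def focus_state(signal_a, signal_b, tolerance=1e-6):
    """Classify the focus state from the outputs of the two
    light-receiving element arrays S_A and S_B."""
    delta_v = evaluation_value(signal_a) - evaluation_value(signal_b)
    if abs(delta_v) <= tolerance:
        return "in-focus"
    return "forward-focus" if delta_v > 0 else "backward-focus"
```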
In the above-described focus detection, however, since the N.A. on the image side changes greatly when the magnification of the optical system L is changed, the difference ΔV between the evaluation values V_A and V_B corresponding to the light-receiving element arrays S_A and S_B decreases depending on the magnification of the optical system L. For this reason, it is sometimes difficult to determine a near-focus state or a far-focus state. Furthermore, the evaluation value V_F at the focal plane F may be reduced, making determination of an in-focus state difficult.
If, for example, the distances between the focal plane F and the pair of light-receiving element arrays S_A and S_B are set in accordance with a low magnification of the optical system, as shown in FIG. 24C, then when the magnification of the optical system is increased, the N.A. on the image side is greatly decreased as compared with the case of the low magnification, thus increasing the depth of focus on the image side, as shown in FIG. 24B. As a result, the difference ΔV between the evaluation values V_A and V_B is reduced, making determination of the defocus direction difficult.
In contrast to this, assume that the distances between the focal plane F and the light-receiving element arrays S_A and S_B are set in accordance with a high magnification of the optical system. In this case, when the magnification of the optical system is changed to a low magnification, the N.A. on the image side is increased as compared with the case of the high magnification, thus decreasing the depth of focus on the image side. As a result, the evaluation value at the focal plane F becomes small. Moreover, a region (dead zone) where ΔV = 0 continuously appears is generated near the focal plane F. For these reasons, determination of an in-focus state cannot be performed.
In a conventional apparatus, therefore, the distances lA and lB between the focal plane F and the light-receiving element arrays S_A and S_B are changed within a range of Δl in accordance with switching of the magnification of the optical system L, as shown in FIG. 25A. More specifically, if the distances between the focal plane F and the light-receiving element arrays S_A and S_B are set to lA1 and lB1 when the magnification of the optical system is low, as shown in FIG. 25B, the evaluation values V_A and V_B calculated from output signals from the light-receiving element arrays S_A and S_B exhibit a dead zone, in which ΔV = 0 continuously appears, near the focal plane F, as shown in FIG. 25C. For this reason, the distances between the focal plane F and the light-receiving element arrays S_A and S_B are changed by Δl to lA2 and lB2. With this operation, the evaluation values V_A and V_B shown in FIG. 25D are obtained, and hence the difference ΔV is increased to a value large enough to detect a focusing position.
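The distance-switching scheme above amounts to selecting the sensor offsets from the magnification. A minimal sketch follows, assuming a single switching threshold and hypothetical numeric values for lA1, lB1, and Δl; none of these values come from the conventional apparatus.

```python
BASE_OFFSETS = (1.0, 1.0)  # (lA1, lB1): nominal distances from focal plane F (hypothetical)
DELTA_L = 0.5              # Δl: shift applied to escape the dead zone (hypothetical)

def sensor_offsets(magnification, low_mag_threshold=20.0):
    """Return (lA, lB): at low magnification the offsets are enlarged
    by Δl to (lA2, lB2) so that ΔV does not vanish near the focal plane."""
    la, lb = BASE_OFFSETS
    if magnification < low_mag_threshold:
        la += DELTA_L
        lb += DELTA_L
    return la, lb
```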
In the technique shown in FIGS. 25A to 25D, however, since the optical path lengths between the focal plane F and the light-receiving element arrays S_A and S_B are changed, driving units for moving the light-receiving element arrays S_A and S_B along the optical axis are required, and the optical path lengths must be adjusted to optimal values in accordance with the magnification of the optical system. Therefore, the apparatus requires complicated control and a complicated arrangement.
In addition, the video information obtained by the light-receiving element arrays at a specific magnification of the optical system is limited to information corresponding to a single difference in optical path length. This deteriorates the focusing precision of the apparatus.
According to another conventional focusing position detecting apparatus, a line image sensor is arranged near the focal plane of an image forming optical system. In this apparatus, a scanning signal of an image projected on the line image sensor is input to an arithmetic unit, and an optical lens system is controlled by executing an arithmetic operation on the basis of image information, such as the contrast of the image, included in the scanning signal obtained by line scanning, thereby obtaining an optimal focusing result.
Samples to be observed, however, vary widely in shape and distribution. For example, as shown in FIG. 26, the components of a sample 91 may be scattered in an image, so that the pixel array of the line image sensor (scanning line) cannot cover the image of the sample 91. In such a case, since the scanning signal output from the line image sensor includes no image information of the sample on which the optical system is to be focused, a focusing position cannot be detected.
In order to solve such a problem, the following technique has been proposed. The image surface of an image (to be read by the line image sensor) formed by the image forming optical system is scanned from a read start position to a read end position to read the image. The read image is then converted into a new image. Thereafter, the line image sensor is moved from the read end position to the read start position. In the process of this movement, the line image sensor is stopped at a position where image information is present. By using a scanning signal at this time, a focusing position is detected. This technique is disclosed in, e.g., Published Examined Japanese Patent Application No. 63-37363.
The focusing position detecting apparatus in the above publication has the arrangement shown in FIG. 27. A line image sensor 93 is moved to a movement start position by a line image sensor moving unit 95 in response to a command from an arithmetic control unit (CPU) 94. Under the control of the CPU 94, the line image sensor 93 is driven by a line image sensor driver 96, and a scanning signal output from the line image sensor 93 is input to a line image sensor output processor 97 to be converted into a digital signal. The digital signal is then fetched by the CPU 94. The CPU 94 performs arithmetic processing for the scanning signal data to determine the presence/absence of information required for a focusing position detection operation. The above-described operation is repeated until information required for a focusing position detecting operation is obtained, i.e., until the scanning line is located at a position where a sample is present. In the apparatus shown in FIG. 27, the line image sensor 93 is moved to the optimal position for a focusing position detecting operation, and a scanning signal of an image is fetched by the CPU 94 in this manner to control an optical lens system (not shown), thereby detecting a focusing position.
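The search procedure performed by the CPU 94 can be sketched as a simple loop: the line image sensor is stepped across the image surface until a scanning signal containing sample information is found, and that position is then used for focus detection. All functions below are hypothetical stand-ins for the hardware units (moving unit 95, driver 96, output processor 97), and the contrast threshold is an assumption.

```python
def has_sample_information(scan, threshold=10.0):
    """Decide whether a scanning signal carries enough contrast
    to be usable for focusing position detection."""
    return max(scan) - min(scan) >= threshold

def find_scan_position(positions, acquire_scan):
    """Step the sensor through candidate positions; return the first
    position whose scan contains sample information, else None."""
    for pos in positions:
        scan = acquire_scan(pos)  # move the sensor and read one line
        if has_sample_information(scan):
            return pos
    return None
```

In this sketch `acquire_scan` abstracts the move-then-read cycle; in the apparatus of FIG. 27 each iteration requires a mechanical movement, which is the source of the time penalty noted below.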
In the focusing position detecting apparatus shown in FIG. 27, however, since the line image sensor moving unit 95 is required to move the line image sensor 93, the apparatus is undesirably increased in size.
Furthermore, in the apparatus shown in FIG. 27, since arithmetic processing of linear image information must be repeatedly performed while the line image sensor 93 is moved, considerable time is required to detect a focusing position.