1. Field of the Invention
The present invention relates to a focus detection apparatus used in, e.g., a single-lens reflex camera.
2. Related Background Art
As a conventional focus detection system, a phase difference detection system is known.
The phase difference detection system will be described below with reference to FIG. 15. Light incident through a region 101 of an object-lens 100 is focused on an image sensor array A through a field mask 200, a field lens 300, an aperture opening 401, and a refocusing lens 501.
Similarly, light incident through a region 102 of the object-lens 100 is focused on an image sensor array B through the field mask 200, the field lens 300, an aperture opening 402, and a refocusing lens 502.
In a so-called near-focus state wherein the object-lens 100 forms a sharp image of an object in front of a predicted focal plane, a pair of object images focused on the image sensor arrays A and B are separated away from each other. Contrary to this, in a so-called far-focus state wherein a sharp image of the object is formed behind the predicted focal plane, the two object images approach each other. In a so-called in-focus state wherein a sharp image of the object is formed on the predicted focal plane, the object images on the image sensor arrays A and B relatively coincide with each other.
Therefore, the pair of object images are photoelectrically converted into electrical signals by the image sensor arrays. These signals are subjected to arithmetic processing to obtain relative positions of the pair of object images, thus calculating a focusing state of the object-lens 100, more particularly, a defocus amount representing a separation amount from the in-focus state and its direction.
An arithmetic processing method of calculating the defocus amount will be described below.
The image sensor arrays A and B shown in FIG. 15 respectively comprise pluralities of photoelectric transducers. As shown in FIGS. 16A and 16B, the arrays A and B output pluralities of photoelectric conversion outputs a1, . . . , an and b1, . . . , bn, and correlative calculations are performed while relatively shifting the two data strings by a predetermined number L of data. More specifically, a correlation amount C(L) is calculated by the following equation:

C(L) = Σ|a(i) − b(i+L)| (sum taken from i = k to i = r)

where L is an integer corresponding to the shift amount of the data strings, as described above, and the first term k and the final term r may be changed depending on the shift amount L.
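As an illustrative sketch (not part of the patent; function name and data are hypothetical), the correlation amount C(L) is a sum of absolute differences between the two data strings at a relative shift L, taken from a first term k to a final term r:

```python
def correlation(a, b, L, k, r):
    """Correlation amount C(L): sum of absolute differences between
    data string a and data string b shifted by L elements, over terms k..r."""
    return sum(abs(a[i] - b[i + L]) for i in range(k, r + 1))

# Identical object images correlate perfectly (C = 0) at shift L = 0.
a = [10, 40, 80, 40, 10, 5]
b = [10, 40, 80, 40, 10, 5]
print(correlation(a, b, 0, 0, 5))  # -> 0
```

A smaller C(L) indicates a better match; the shift yielding the local minimum of C(L) corresponds to the relative displacement of the pair of object images.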
Of the calculated correlation amounts C(L), the shift amount which yields a local minimum correlation amount is multiplied by a constant determined by the optical system and the pitch of the photoelectric transducers of the image sensor arrays A and B shown in FIG. 15, thus obtaining a defocus amount.
However, the correlation amount C(L) is a discrete value, as shown in FIG. 16C, and a minimum unit of a detectable defocus amount is restricted by the pitch of the photoelectric transducers of the image sensor arrays A and B. Thus, a method of calculating a new local minimum value C_ex by performing interpolation calculations based on the discrete correlation amounts C(L) to execute precise focus detection is disclosed in U.S. Pat. No. 4,561,749.
The interpolation calculations are made using a correlation amount C_0 as a local minimum value and correlation amounts C_1 and C_-1 separated by the same shift amount on two sides of the amount C_0. The shift amount Fm which yields the local minimum value C_ex, and the defocus amount DF, are given by the following equations:

DL = (C_-1 − C_1)/2
E = MAX{C_1 − C_0, C_-1 − C_0}
Fm = L_0 + DL/E
C_ex = C_0 − |DL|
DF = Kf × Fm

where MAX{C_a, C_b} means to select the larger one of C_a and C_b, L_0 is the shift amount which yields C_0, and Kf is the constant determined by the optical system and the pitch of the photoelectric transducers of the image sensor arrays A and B shown in FIG. 15.
It must be judged whether the defocus amount obtained in this manner indicates a true defocus amount or is caused by a variation in correlation amount due to noise components. When the following condition is satisfied, it can be judged that the defocus amount is reliable:

E > E1 and C_ex/E < G1 (E1 and G1 are predetermined values) Condition (1)
where E is a value depending on the contrast of the object; the larger the value E, the higher the contrast, and the higher the reliability. C_ex/E mainly depends on noise components; the closer it is to 0, the higher the reliability. When it is determined that the obtained defocus amount is reliable, the object-lens 100 is moved to an in-focus position based on the defocus amount DF.
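Condition (1) can be sketched as follows (not part of the patent; the threshold values E1 and G1 are hypothetical, since the patent only calls them "predetermined values"):

```python
E1, G1 = 50.0, 0.3  # hypothetical thresholds for contrast and noise

def is_reliable(e, c_ex, e1=E1, g1=G1):
    """Condition (1): high contrast (E > E1) and low noise floor (C_ex/E < G1)."""
    return e > e1 and c_ex / e < g1

print(is_reliable(90.0, 10.0))  # True: 90 > 50 and 10/90 < 0.3
print(is_reliable(40.0, 10.0))  # False: contrast value E too low
```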
In the focus detection system described above, reliable focus detection is disturbed unless the object images formed on the image sensor arrays have at least a certain contrast. In general, objects to be photographed tend to have a higher horizontal contrast than vertical contrast. Thus, the pair of image sensor arrays A and B are arranged on the photographing surface in the horizontal direction, so that focus detection is performed based on the horizontal contrast.
In another system, in consideration of a case wherein a horizontal contrast is low and a vertical contrast is high or a case wherein a camera is used in a vertical position, a pair of image sensor arrays A and B and a pair of image sensor arrays C and D are respectively arranged in both horizontal and vertical directions, as shown in FIG. 17A, so that focus detection can be performed based on both the horizontal and vertical contrasts.
In an optical system in this case, a field mask 20, a field lens 30, an aperture 40, a refocusing lens 50, and an image sensor chip 60 are arranged along the optical axis of an object-lens 10 in the order named. The field mask 20 has a cross-shaped opening, and is arranged near the predicted focal plane of the object-lens 10 so as to restrict an air image of an object focused by the object-lens 10. The aperture 40 has four openings 41, 42, 43, and 44, and these openings 41 to 44 are projected onto the object-lens 10 as opening images 11, 12, 13, and 14.
The refocusing lens 50 consists of four lenses 51, 52, 53, and 54 corresponding to the openings 41, 42, 43, and 44 of the aperture 40, respectively, as shown in FIG. 17B, and focuses an image of the field mask 20 on the image sensor chip 60.
Therefore, a light beam incident from the region 11 of the object-lens 10 is focused on the image sensor array A through the field mask 20, the field lens 30, the opening 41 of the aperture 40, and the lens 51 of the refocusing lens 50. Similarly, light beams incident from the regions 12, 13, and 14 of the object-lens 10 are respectively focused on the image sensor arrays B, C, and D.
Object images formed on the image sensor arrays A and B are separated from each other when the object-lens 10 is in a near-focus state. The images approach each other in a far-focus state. In an in-focus state, the images are aligned at a predetermined distance. Thus, the signals from the image sensor arrays A and B are subjected to arithmetic processing to detect a horizontal focusing state of the object-lens 10.
Similarly, object images formed on the image sensor arrays C and D are separated from each other when the object-lens 10 is in a near-focus state. The images approach each other in a far-focus state. In an in-focus state, the images are aligned at a predetermined distance. Thus, the signals from the image sensor arrays C and D are subjected to arithmetic processing to detect a vertical focusing state of the object-lens 10.
Whether the lens is driven based on a focusing state according to a horizontal or vertical contrast can be determined by, for example:
(1) a method of selecting a contrast having higher reliability (e.g., having the larger value E);
(2) a method of preferentially using one direction (e.g., the horizontal direction) and performing focus detection using the other direction when no reliable result can be obtained or when no local minimum value C_0 is present and calculations are disabled; and
(3) a method of averaging calculation results in both the directions.
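Method (2) above can be sketched as follows (not part of the patent; the result format and threshold values are hypothetical, and each result is taken as a tuple of defocus amount, contrast value E, and interpolated minimum C_ex, or None when no local minimum exists):

```python
def choose_result(horiz, vert, e1=50.0, g1=0.3):
    """Prefer the horizontal-direction result; fall back to the vertical
    one when the horizontal result is missing or unreliable.
    Each result is (defocus, e, c_ex) or None."""
    def reliable(res):
        if res is None:
            return False
        _df, e, c_ex = res
        return e > e1 and c_ex / e < g1  # Condition (1)
    if reliable(horiz):
        return horiz[0]
    if reliable(vert):
        return vert[0]
    return None  # focus detection disabled in both directions

# Horizontal calculation disabled, vertical reliable: vertical is used.
print(choose_result(None, (0.8, 90.0, 10.0)))  # -> 0.8
```

Method (1) would instead compare the E values of the two directions and keep the larger, and method (3) would average the two defocus amounts when both are reliable.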
In the focus detection apparatus described above, when images of a plurality of objects having different distances are formed on the image sensor arrays, an in-focus state may be erroneously determined at a distance offset toward the near- or far-focus side from the true in-focus position, or focus detection may be disabled.
Thus, in still another method disclosed in, e.g., Japanese Patent Laid-Open Nos. 60-262004 and 61-55618, U.S. Pat. No. 4,851,657, Japanese Patent Laid-Open No. 62-155608, and the like, each of a pair of image sensor arrays is divided into a plurality of blocks to segment an object, and focus detection calculations are performed in units of blocks. On the basis of a plurality of calculation results, a block in which the closest object is present or in which an object having the maximum contrast is present is selected. The calculation result of the selected block is determined as a focus detection state of an object-lens. In a camera, the object-lens is driven to an in-focus position based on the calculation result.
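The closest-object selection among per-block results can be sketched as follows (not part of the patent; the sign convention, in which a larger defocus amount is assumed to indicate a closer object, is hypothetical):

```python
def select_block(block_defocus):
    """Among reliable per-block defocus amounts, select the block
    containing the closest object, assumed here to be the block with
    the largest defocus amount. Returns None when no block is usable."""
    if not block_defocus:
        return None
    return max(block_defocus)

# Three blocks yield defocus amounts; the closest object's block wins.
print(select_block([-0.2, 1.5, 0.4]))  # -> 1.5
```

The maximum-contrast variant mentioned above would instead compare each block's contrast value E and adopt the defocus amount of the block with the largest E.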
However, the conventional phase difference detection system is ineffective for an object having a periodic contrast. When an image of an object having a periodic contrast is formed on the image sensor arrays and the image sensor array outputs shown in FIGS. 18A and 18B are obtained, the correlation amounts C(L) have a plurality of local minimum values C_x, C_y, and C_z, as shown in FIG. 18C. As a result, a plurality of defocus amounts are calculated. In this case, a true defocus amount cannot be specified, and the object-lens may be driven based on a quite wrong defocus amount.
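The ambiguity can be seen in a small numeric sketch (not part of the patent; data are hypothetical): for a strictly periodic object image, the correlation amount reaches the same minimum at several shifts, so no unique defocus amount can be chosen.

```python
a = [0, 100] * 4  # periodic object image on array A
b = [0, 100] * 4  # identical periodic image on array B

def c(L):
    # correlation amount over the overlapping part of the two data strings
    return sum(abs(a[i] - b[i + L]) for i in range(len(a) - L))

print([c(L) for L in range(5)])  # -> [0, 700, 0, 500, 0]
```

The correlation amount is zero at shifts 0, 2, and 4, so three candidate defocus amounts result and the true one cannot be identified from the correlation alone.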
Moreover, when each of a pair of image sensor arrays is divided into a plurality of blocks to segment an object, an object image which does not have a periodic pattern over the whole array before division may nevertheless exhibit periodic patterns within individual blocks after division.