1. Field of the Invention
The present invention relates to a focusing detection apparatus for discriminating a focusing state of an object lens by detecting a relative positional relation of a plurality of object images.
2. Description of the Prior Art
As a focusing detection apparatus for a camera, there has been known an apparatus which determines the in-focus state by detecting a deviation between two images formed by dividing an exit pupil. For example, U.S. Pat. No. 4,185,191 issued on Jan. 22, 1980 discloses an apparatus which has a fly-eye lens arranged on an anticipated focal plane (which is conjugate with an imaging plane) of an imaging lens to form two images which are aberrated in accordance with a focusing error of the imaging lens. Japanese Patent Application Laid-Open Nos. 55-118019 laid open on Sept. 10, 1980 and 55-155331 laid open on Dec. 3, 1980 disclose a so-called secondary imaging process, in which aerial images formed on an anticipated focal plane are directed by two parallel-arranged secondary imaging optical systems to an image sensor to detect a positional deviation between the two images. The latter secondary imaging process requires a relatively long arrangement, but does not need a special optical system, such as the fly-eye lens, which the former apparatus needs.
FIG. 1 schematically shows a secondary imaging process focusing detection apparatus. A field lens 3 is arranged coaxially with an optical axis 2 of an imaging lens 1 whose focusing state is to be detected. Two secondary imaging lenses 4a and 4b are arranged behind the field lens 3, symmetrically with respect to the optical axis 2, and photo-electric conversion element arrays 5a and 5b are arranged behind them. Irises 6a and 6b are arranged near the secondary imaging lenses 4a and 4b. The field lens 3 images the exit pupil of the imaging lens 1 onto the pupil planes of the two secondary imaging lenses 4a and 4b. As a result, the light fluxes incident on the secondary imaging lenses 4a and 4b exit from non-overlapping areas of equal size, corresponding to the respective secondary imaging lenses, on the exit pupil plane of the imaging lens 1. Since the aerial image formed in the vicinity of the field lens 3 is refocused onto the plane of the photo-electric conversion element arrays 5a and 5b by the secondary imaging lenses 4a and 4b, the positions of the two images on the arrays 5a and 5b shift in accordance with a displacement of the aerial image in the optical axis direction. FIGS. 2(A), 2(B) and 2(C) illustrate this. As shown in FIG. 2(A), the two images are at the centers of the photo-electric conversion element arrays 5a and 5b in the in-focus state. As shown in FIG. 2(B), the two images are shifted away from the optical axis 2 in a far-focus state, and as shown in FIG. 2(C), they are shifted toward the optical axis 2 in a near-focus state. By photo-electrically converting the image intensity distributions and processing the resulting electrical signals to detect the positional deviation between the two images, the in-focus state can be determined.
One of the photo-electrically converted signal processing systems is disclosed in U.S. Pat. No. 4,250,376. The following operation is carried out in analog or digital fashion:

V = Σ|a(i) - b(i+1)| - Σ|a(i+1) - b(i)| . . . (1)

where N is the number of photo-electric conversion elements in each of the photo-electric conversion element arrays 5a and 5b, a(i) and b(i) are the output signals of the i-th photo-electric conversion elements of the arrays 5a and 5b, respectively, V is a correlation, and each sum runs over i = 1 to N - 1.
The imaging lens 1 is driven out or in depending on whether the correlation V is positive or negative. In the signal processing system in accordance with the formula (1), only the direction of drive of the imaging lens 1 is determined.
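As an illustration only (the patent describes analog or digital circuitry, not software), the direction decision based on the sign of the correlation V of formula (1) can be sketched in Python; the function name and the sample sensor outputs below are hypothetical:

```python
def correlation_v(a, b):
    # Correlation V of formula (1):
    #   V = sum |a(i) - b(i+1)| - sum |a(i+1) - b(i)|
    # a, b: equal-length output sequences of the element arrays 5a and 5b.
    n = len(a)
    return (sum(abs(a[i] - b[i + 1]) for i in range(n - 1))
            - sum(abs(a[i + 1] - b[i]) for i in range(n - 1)))

# Hypothetical outputs: image b is image a displaced one element to the right.
a = [0, 1, 4, 9, 4, 1, 0, 0]
b = [0, 0, 1, 4, 9, 4, 1, 0]
v = correlation_v(a, b)
# Only the sign of V is used: it selects the drive direction of the lens.
direction = "drive out" if v > 0 else "drive in" if v < 0 else "in focus"
```

Note that swapping the two images reverses the sign of V, which is what makes the sign usable as a direction signal.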
In a focusing detection apparatus which determines the in-focus state based on the deviation between images, it has been known to calculate the required distance of movement of the imaging lens 1 by relatively displacing one image with respect to the other, based on the fact that the deviation between the two images is proportional to the focusing error. This method is old in base-line ranging type focusing detection apparatus, and it has also been known in TTL type focusing detection apparatus, as shown by U.S. Pat. No. 4,387,975 issued on Jan. 14, 1983 and U.S. Pat. No. 4,333,007 issued on June 1, 1982. In those apparatuses the photo-electrically converted signals of the images are converted by an A/D converter into multi-bit digital data, and the deviation between the two images is calculated by a microcomputer mounted in the camera to determine the focusing error. For example, the image represented by b(i) is moved in the processing system relative to the image represented by a(i), and the amount of movement required for coincidence of the two images is calculated to provide the deviation between the images. Namely, an operation of

V_m = Σ|a(i) - b(i+1+m)| - Σ|a(i+1) - b(i+m)| . . . (2)
is repeatedly carried out while sequentially assigning integers within a predetermined range to the relative displacement m, to determine the relative displacement m which presents zero correlation V_m. Assuming that the correlation V_m changes as shown in FIG. 3 when the relative displacement m changes within a range of -4 ≤ m ≤ +4, an image deviation corresponding to 1.5 pitches is determined, because the correlation V_m should be zero when the two images coincide.
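The scan of formula (2) over integer displacements can be sketched in Python as below. This is a minimal illustration, not the patented processing system; in particular, the boundary rule (summing only over indices for which every array access stays in range) is an assumption the text does not specify:

```python
def correlation_vm(a, b, m):
    # V_m of formula (2) for relative displacement m:
    #   V_m = sum |a(i) - b(i+1+m)| - sum |a(i+1) - b(i+m)|
    # Only indices where every access stays in range are summed
    # (an assumed boundary rule).
    n = len(a)
    idx = [i for i in range(n - 1) if 0 <= i + m and i + 1 + m <= n - 1]
    return (sum(abs(a[i] - b[i + 1 + m]) for i in idx)
            - sum(abs(a[i + 1] - b[i + m]) for i in idx))

# With identical images the correlation is zero at m = 0 and changes sign
# around it; that sign change is the zero crossing the interpolation locates.
a = [0, 1, 4, 9, 4, 1, 0, 0]
values = {m: correlation_vm(a, a, m) for m in range(-4, 5)}
```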
The present applicant has proposed a manner to determine the direction of movement of the imaging lens in accordance with formula (3) or (4) shown below:

V = Σ min{a(i), b(i+1)} - Σ min{a(i+1), b(i)} . . . (3)

V = Σ max{a(i), b(i+1)} - Σ max{a(i+1), b(i)} . . . (4)

where min{x, y} represents the smaller of two real numbers x and y, and max{x, y} represents the larger of the two. The present applicant has also disclosed a method of calculating the image deviation by using the formulas (3) and (4). For example, the image represented by b(i) of the formula (3) is moved relative to the image represented by a(i), and the following operation is carried out for each integer value of the relative displacement m to determine the relative displacement m for which V_m = 0:

V_m = Σ min{a(i), b(i+1+m)} - Σ min{a(i+1), b(i+m)} . . . (5)
Similarly, for the formula (4), the following operation is carried out:

V_m = Σ max{a(i), b(i+1+m)} - Σ max{a(i+1), b(i+m)} . . . (6)
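The min- and max-based correlations of formulas (5) and (6) can be sketched alongside one another; as above, this is an illustrative Python rendering with an assumed in-range boundary rule, not the disclosed circuit:

```python
def correlation_min_m(a, b, m):
    # V_m of formula (5): sum min{a(i), b(i+1+m)} - sum min{a(i+1), b(i+m)}.
    n = len(a)
    idx = [i for i in range(n - 1) if 0 <= i + m and i + 1 + m <= n - 1]
    return (sum(min(a[i], b[i + 1 + m]) for i in idx)
            - sum(min(a[i + 1], b[i + m]) for i in idx))

def correlation_max_m(a, b, m):
    # V_m of formula (6): sum max{a(i), b(i+1+m)} - sum max{a(i+1), b(i+m)}.
    n = len(a)
    idx = [i for i in range(n - 1) if 0 <= i + m and i + 1 + m <= n - 1]
    return (sum(max(a[i], b[i + 1 + m]) for i in idx)
            - sum(max(a[i + 1], b[i + m]) for i in idx))
```

Since min{x, y} + max{x, y} = x + y, the two correlations of a given image pair are closely related; for the shifted test images used earlier they come out with equal magnitude and opposite sign.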
When the formulas (2), (5) and (6) are used, the relative displacement m for which V_m = 0 is usually not an integer. Accordingly, it is usual to search for the relative displacement m at which the sign reverses between adjacent correlations V_m and V_(m+1) (that is, V_m · V_(m+1) ≤ 0) and to interpolate a value between them. Since the number of relative displacements m which meet the condition V_m · V_(m+1) ≤ 0 is not always one, |V_m - V_(m+1)| is calculated for each m which meets V_m · V_(m+1) ≤ 0, and the m which presents the largest change in the correlation V_m is selected as the relative displacement m.
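The selection rule just described (find every sign reversal, keep the one with the steepest change, then interpolate) can be sketched as follows. The linear form of the interpolation is an assumption, since the text only says that a value is interpolated; the function name is illustrative:

```python
def select_crossing(vs):
    # vs: mapping from consecutive integer displacements m to correlations V_m.
    # Among all m with V_m * V_(m+1) <= 0, pick the one with the largest
    # |V_m - V_(m+1)|, then linearly interpolate the zero crossing.
    ms = sorted(vs)
    candidates = [m for m in ms[:-1] if vs[m] * vs[m + 1] <= 0]
    if not candidates:
        return None  # no sign reversal within the scanned range
    m = max(candidates, key=lambda k: abs(vs[k] - vs[k + 1]))
    v0, v1 = vs[m], vs[m + 1]
    return m if v0 == v1 else m + v0 / (v0 - v1)
```

Given two crossings, one from 1 to -2 and one from -1 to 1, the first is chosen because its change (3) exceeds the second's (2).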
In the formulas (1), (3) and (4), the photo-electrically converted signals a(i) and b(i) are each shifted by one pitch; the set of the shifted a(i) and the non-shifted b(i) and the set of the non-shifted a(i) and the shifted b(i) are processed, and the difference between the two processing results is calculated. Thus, the formulas (1), (3) and (4) can be rewritten as

V = Σ{a(i) □ b(i+1)} - Σ{a(i+1) □ b(i)} . . . (7)
where x □ y represents an operational relation between two real numbers x and y. This is illustrated in FIG. 4(A), in which the data sets to be processed are connected by solid lines or broken lines. The set of two data connected by a diagonal broken line represents an operation a(i) □ b(i+1) for the first sum on the right side of the formula (7), and the set of two data connected by a diagonal solid line represents an operation a(i+1) □ b(i) for the second sum on the right side of the formula (7). Similarly, the formulas (2), (5) and (6) can be rewritten as

V_m = Σ{a(i) □ b(i+1+m)} - Σ{a(i+1) □ b(i+m)} . . . (8)
This is illustrated in FIG. 4(B). In FIG. 4(B), the operation is carried out over the entire area in which the two images overlap. In this method, the length of the operation area varies depending on the relative displacement m. As a result, an undesirable result is obtained if a high intensity object is present at a position slightly displaced from the area under measurement. To avoid this inconvenience, the operation area length may be unified to the shortest length, so that the same operation length is used for all relative displacements m. In FIG. 4(B) the operation area length is unified to that of m = ±2.
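A sketch of the generic formula (8) with the operation area unified to the shortest length, so that the number of summed terms is the same for every m. Anchoring the fixed-length window at the first valid index for each m is one possible reading of the text, not necessarily the arrangement of FIG. 4(B); the names are illustrative:

```python
def correlation_unified(a, b, m, op, m_max=2):
    # Formula (8): V_m = sum{a(i) op b(i+1+m)} - sum{a(i+1) op b(i+m)},
    # with the number of terms fixed to the shortest overlap, which
    # occurs at |m| = m_max.  op is the operational relation x op y.
    n = len(a)
    length = n - 1 - m_max          # term count, independent of m
    start = max(0, -m)              # first index valid for this m
    idx = range(start, start + length)
    return (sum(op(a[i], b[i + 1 + m]) for i in idx)
            - sum(op(a[i + 1], b[i + m]) for i in idx))
```

With op set to an absolute difference, min, or max, this reduces to fixed-window versions of the formulas (2), (5) and (6), respectively.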
In the prior art method represented by the formulas (7) and (8), the deviations which form the bases of the operations of the first sum and the second sum differ from each other by two pitches, as seen from FIGS. 4(A) and 4(B). For example, a(2) has an operational relation with b(1) and with b(3). On the other hand, in the image deviation type focusing detection method, the relative shift of the two images in processing is equivalent, in a sense of simulation, to a change of the focusing error, and a shift of one pitch corresponds to a predetermined change in the focusing error. Accordingly, if the focusing error corresponding to the two-pitch image deviation is small, a high density simulation is attained and a correspondingly high precision focusing detection operation is possible. To this end, however, the element pitch of the photo-electric conversion element arrays 5a and 5b must be small, and for a given size of the area to be measured, the data quantity increases and the load on the electrical processing circuit increases as the pitch becomes smaller. For example, in a processing system which uses a microcomputer, the increase of the data quantity directly leads to an increase of the data memory capacity, an increase of cost, an increase of the processing time, and a reduction of the real-time capability of the focusing detection apparatus. Further, the photo-sensitive areas of the photo-electric conversion elements of the arrays 5a and 5b become smaller and the sensitivity falls. Thus, there is a limit to reducing the pitch of the photo-electric conversion element arrays 5a and 5b. As a result, the focusing error corresponding to the two-pitch image deviation cannot be significantly reduced, and the precision of the operation is not high. Accordingly, in the prior art processing system, it is difficult to improve the discrimination precision.