1. Field of the Invention
The present invention relates to a focus state detection device used in a camera, video equipment or the like.
2. Description of Related Art
A focus state detection device is known that detects the focus adjustment state of the shooting lens in a camera, video equipment or the like.
FIG. 8 shows a focus state detection device employing the phase difference method. Light rays that are incident on the region 101 in the shooting lens 100 pass through a field of vision mask 200, a field lens 300, a diaphragm aperture 401 and a re-imaging lens 501 and are composed into an image on image sensor array A. On the image sensor array A, a plurality of photoelectric converter elements that generate an output corresponding to the intensity of the incident light are aligned in a one-dimensional manner. Similarly, light rays that are incident on the region 102 in the shooting lens 100 pass through the field of vision mask 200, the field lens 300, a diaphragm aperture 402 and a re-imaging lens 502 and are composed into an image on image sensor array B.
The two subject images formed on these image sensor arrays A and B are farther apart in the so-called front focus state, wherein the shooting lens 100 composes a clear image of the subject in front of the predicted focussing plane. Conversely, the images are closer together in the so-called back focus state, wherein the shooting lens 100 composes a clear image of the subject in back of the predicted focussing plane. At the so-called in-focus time when a clear image of the subject is formed precisely on the predicted focussing plane, the subject images on the image sensor arrays A and B relatively coincide.
Accordingly, by changing the pair of subject images into electrical signals through photoelectric conversion on the image sensor arrays A and B and by processing these signals to find the shift amount in the relative positions of the pair of subject images, it is possible to find the amount of deviation of the focus adjustment state of the shooting lens 100 from the in-focus state. This deviation is called the defocus amount. The direction of the shift is also ascertainable. The focus state detection region is the area of overlap near the predicted focussing plane of the image sensor arrays A and B as projected by the re-imaging lenses 501 and 502. As shown in FIG. 9, the focus state detection region is generally positioned in the center of the photo field.
Next, the conventional method of calculating the defocus amount will be described.
The image sensor arrays A and B are each composed of a plurality of photoelectric converter elements. The elements output a plurality of photoelectrically converted output signal strings a1-an and b1-bn as shown in FIGS. 10a and 10b. Furthermore, the pair of data strings undergoes a correlation algorithm while being shifted relative to each other by a preset data amount L. Calling the maximum shift number lmax, the range of L is -lmax to +lmax. Specifically, the correlation amount C[L] is calculated using formula 1:
C[L]=Σ|ai-bj| (1)
Here, Σ indicates the sum over i=k to r. In addition, j-i=L, where L=-lmax, ..., -1, 0, 1, ..., +lmax.
The L in formula 1 is an integer corresponding to the shift amount in the data strings as described above. The first term k and the last term r are dependent upon the shift amount L and can be changed. The shift amount in the relative positions is the shift amount L when the pair of data strings coincides. Therefore, the shift amount L that gives the smallest correlation amount out of the correlation amounts C[L] is detected. The defocus amount is this shift amount multiplied by a constant found from the pitch width of the photoelectric converter elements in the image sensor array and the optical system shown in FIG. 8. However, the correlation amounts C[L] are widely dispersed values as shown in FIG. 10c, and the smallest unit of the defocus amounts that can be detected is limited by the pitch width of the photoelectric converter elements in the image sensor arrays A and B.
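The correlation algorithm of formula 1 can be illustrated with a short sketch. The function name, the 0-based indexing, and the fixed window bounds k and r are assumptions for illustration; in the method described above, k and r change with the shift amount L.

```python
# Sketch of the correlation amount C[L] of formula 1 (illustrative;
# uses 0-based indices and a fixed summation window k..r).
def correlation_amounts(a, b, k, r, lmax):
    """Return C[L] = sum of |a[i] - b[i+L]| for i = k..r, for each L."""
    c = {}
    for L in range(-lmax, lmax + 1):
        # j - i = L, so element a[i] is compared with b[i + L]
        c[L] = sum(abs(a[i] - b[i + L]) for i in range(k, r + 1))
    return c
```

The shift L at which C[L] is smallest approximates the relative displacement of the two subject images on the arrays.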
A method wherein precision focus state detection is performed by conducting an interpolation algorithm on the basis of the widely dispersed correlation amounts C[L], thereby calculating a new, truly smallest value Cex, is disclosed by the present applicant in U.S. Pat. No. 4,561,749. This is a method wherein the true smallest value Cex and the shift amount Ls that corresponds to Cex are calculated from formulas 2 and 3 using the correlation amount C[l], which is the smallest amount, and the correlation amounts C[l+1] and C[l-1] at the shift amounts to either side, as shown in FIG. 11:
DL=(C[l-1]-C[l+1])/2
Cex=C[l]-|DL|
E=MAX{C[l+1]-C[l], C[l-1]-C[l]} (2)
Here, MAX{Ca, Cb} means to select the larger of Ca and Cb.
Ls=l+DL/E (3)
Furthermore, the defocus amount DF is calculated from formula 4 using the shift amount Ls:
DF=Kf×Ls (4)
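The interpolation of formulas 2 through 4 can be sketched as follows. The names `c`, `l` and `kf` are illustrative stand-ins for the correlation amounts C[L], the discrete shift with the smallest amount, and the constant Kf.

```python
# Sketch of formulas 2-4: three-point interpolation around the
# discrete minimum C[l] to refine the shift and the defocus amount.
def refine_shift(c, l, kf):
    dl = (c[l - 1] - c[l + 1]) / 2.0           # formula 2
    cex = c[l] - abs(dl)                        # true smallest value Cex
    e = max(c[l + 1] - c[l], c[l - 1] - c[l])   # larger adjacent rise E
    ls = l + dl / e                             # formula 3
    df = kf * ls                                # formula 4
    return cex, e, ls, df
```

For example, with C[0]=10, C[1]=2 and C[2]=6, the true minimum lies slightly past L=1 on the side of the shallower rise.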
Here, Kf is a constant found from the pitch width of the photoelectric converter elements in the image sensor arrays and focus state detection optical system shown in FIG. 8.
It is necessary to determine whether the defocus amount thus obtained represents the true defocus amount or is a result of fluctuation in the correlation amount caused by noise or the like. The defocus amount is deemed reliable when the conditions shown in formula 5 are met:
E>E1 and Cex/E<G1 (5)
Here, E1 and G1 are specific threshold values.
The numerical value E indicates the condition of the change in the correlation amount and depends on the contrast in the subject: the larger E is, the higher the contrast and the reliability. The smallest value Cex is the residual difference when the two data items most nearly coincide, and ideally Cex is 0. However, because of the effects of noise, and furthermore because the parallax between region 101 and region 102 shown in FIG. 8 creates a minute difference between the pair of subject images, the smallest value Cex does not become 0. Because the effects of noise and the difference between the subject images become smaller as the contrast in the subject increases, Cex/E is used as the numerical value indicating agreement between the two data items. The closer Cex/E is to 0, the higher the reliability and the greater the agreement between the two data items. When a determination is made that reliability exists, driving of the shooting lens, or a display, is conducted on the basis of the defocus amount DF. Hereinafter, the correlation algorithm, the interpolation algorithm and the reliability determination together will be called the focus state detection algorithm.
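The reliability determination of formula 5 amounts to a simple pair of threshold tests; a minimal sketch follows, with the threshold values passed in as parameters since E1 and G1 are device-specific.

```python
# Sketch of formula 5: the defocus amount is deemed reliable only if
# the contrast measure E is large enough and the normalized residual
# Cex/E (agreement between the two data items) is small enough.
def is_reliable(e, cex, e1, g1):
    return e > e1 and cex / e < g1
```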
With the above-described focus state detection device, however, problems arise when plural subjects at different photographic distances are imaged on the image sensor arrays. For example, consider the case wherein a primary subject P and a background BL positioned far from one another are intermixed in the focus state detection region, as shown in FIG. 5a. When the shooting lens 100 is focussed on the background BL, the part of the pair of data items (A array data shown by the solid line, and B array data shown by the dotted line) corresponding to the pattern of the background BL coincides relatively well, but a discrepancy is created in the part corresponding to the primary subject P, as shown in FIG. 5b. Accordingly, no shift amount exists at which the pair of data items coincides, and the smallest value Cex becomes a large value. Focus state detection is impossible because Cex/E does not satisfy the condition in formula 5.
In the present specification, when several subjects having different photographic distances are intermixed within the subject field, the resultant subject will be called a perspective conflict subject.
The focus state detection region is subdivided by dividing each of the two image sensor arrays into a plurality of blocks, and the defocus amount DF is calculated by executing the focus state detection algorithm on each of these blocks. Furthermore, a focus state detection method is disclosed in U.S. Pat. No. 4,977,311 wherein one block, for example the block with the defocus amount indicating the closest distance or the block with the maximum numerical value E, is selected out of the plurality of blocks. The defocus amount of the selected block is set as the final defocus amount indicating the focus adjustment state of the shooting lens, and driving of the shooting lens is conducted in accordance with this final defocus amount.
In addition, in U.S. Pat. No. 4,914,282, a focus state detection method is disclosed wherein detection is made to determine whether the subject is a perspective conflict subject. In the case of a normal subject, the focus state detection algorithm is executed over the entire focus state detection region. In the case of a perspective conflict subject, the focus state detection region is divided into a plurality of blocks in order to execute the focus state detection algorithm. Here, dividing into blocks is conducted by making a plurality of groups of initial terms k and final terms r for the shift amount L=0 in the correlation algorithm of above-described formula 1. For example, as shown in FIG. 7a, in order to execute the focus state detection algorithm by dividing the pair of image sensor arrays, each comprised of forty-six data items, into five blocks each composed of eight data items, the correlation amount C[L] is calculated from formula 1 by setting k=4 and r=11 for the shift amount L=0 in block 1. The shift amount Ls is calculated from formulas 2 and 3 on the basis of these values, and the defocus amount DF is calculated from formula 4. Similarly, the focus state detection algorithm is executed in blocks 2, 3, 4 and 5 by setting k=12 and r=19, k=20 and r=27, k=28 and r=35, and k=36 and r=43, respectively, for the shift amount L=0. Alternatively, it is possible to create larger blocks in the same pair of image sensor arrays than in the case shown in FIG. 7a. For example, the arrays may be divided into three blocks each composed of fourteen data items, with block 1 being k=3 to r=16, block 2 being k=17 to r=30 and block 3 being k=31 to r=44, as shown in FIG. 7b. Hereinafter, the initial term k and the final term r will be called the leading data number and the final data number of the block, respectively.
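The generation of the leading and final data numbers for contiguous equal-width blocks can be sketched as below. The function name and parameters are illustrative; data numbers are 1-based to match the k and r values given above.

```python
# Sketch of generating (k, r) pairs for contiguous blocks of equal
# width, as in the five-block division of FIG. 7a (1-based numbers).
def block_bounds(first_k, width, count):
    return [(first_k + n * width, first_k + (n + 1) * width - 1)
            for n in range(count)]
```

For instance, `block_bounds(4, 8, 5)` reproduces the five eight-item blocks of FIG. 7a, and `block_bounds(3, 14, 3)` the three fourteen-item blocks of FIG. 7b.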
As shown in FIG. 5b, when the image sensor arrays are divided into six blocks 1-6, the pattern for the primary subject P exists only in block 3. Therefore, it is possible to obtain a defocus amount relative to the primary subject P by executing the focus state detection algorithm on the basis of the sensor output from the block 3. In addition, it is possible to obtain a defocus amount relative to the background BL because the pattern for the background BL exists in the other blocks.
In focus state detection devices that divide the focus state detection region into blocks, there is a method for changing the width of the blocks. In this method, the focus state detection algorithm is first executed with narrow blocks; when focus state detection proves impossible in all of the blocks, the blocks are enlarged and the focus state detection algorithm is executed again.
In addition, because there are cases wherein focus state detection becomes impossible because the contrast in the subject falls on the boundary between blocks, a method is disclosed in U.S. Pat. No. 5,068,682 wherein the absolute value of the difference between adjacent data items near the boundary of a block is calculated, and the boundary is moved to the position where this absolute value of the difference is smallest.
However, with the block division in the above-described focus state detection device, a problem arises that accurate focus state detection results cannot be obtained relative to perspective conflict subjects.
In the subject example shown in FIG. 5a, the focus state detection region shifts slightly to the left in the drawing when the photographer changes the composition. When this occurs, the pair of data items becomes as shown in FIG. 5c. The patterns for both the background BL and the primary subject P are intermixed in blocks 2 and 3. Because the subject in blocks 2 and 3 is a perspective conflict subject, it is impossible to obtain a defocus amount relative to the primary subject P.
In this way, through division of the focus state detection region into blocks, certain blocks fall into perspective conflict states, making focus state detection impossible, while conversely focus state detection becomes possible for a certain subject when the perspective conflict within a block is resolved.
In order to solve this problem, a method has been considered wherein separate blocks 6-9 are added, each of which overlaps two adjacent blocks of the division into blocks 1-5. The focus state detection algorithm is executed in all blocks 1-9, and focus state detection is thereby prevented from becoming impossible through perspective conflict. Additionally, a method has also been considered wherein focus state detection is prevented from becoming impossible through perspective conflict by making the block division finer.
However, the former method has the disadvantage that the computational volume of the focus state detection algorithm increases with the number of added blocks, while the latter method has the disadvantage that the precision of the focus state detection algorithm drops because the number of data items from the image sensor arrays that comprise each block becomes smaller.