1. Field of the Invention
The invention is directed to an automatic focus adjustment apparatus for use in cameras and other optical devices. In particular, the invention is directed to a predictive focusing-type automatic focus adjustment apparatus that can detect and predict an object's movement to drive a shooting lens. The object's movement is predicted so as to maintain a focused state.
2. Description of Related Art
In single-lens reflex cameras having an automatic focus adjustment function, also known as an auto-focus function, various kinds of predictive focusing or "tracking servo" methods have been proposed. These auto-focus functions detect that an object is moving, determine a defocus amount based on the object's movement, and then drive a shooting lens based on the determined defocus amount to maintain a focused state. For example, Japanese Laid-Open Patent Application Nos. 4-133016, 5-2127, 5-80235 and 1-107224 disclose such auto-focus functions.
Generally, the detection of a defocus amount is determined through a series of processes summarized as follows. A part of a beam of luminous flux is first transmitted through a shooting lens, and is then directed onto a charge accumulating type sensor (AF sensor), such as a CCD, in a focus detecting optical system. A charge corresponding to an illuminance distribution of an image is read after it is accumulated for an appropriate time period. The charge is then converted to digital data by an A/D converter and input to a microcomputer or CPU. The CPU determines the focusing state by a predetermined algorithm. Then, a defocus amount is calculated as a relative distance between the film's surface and the image plane for the object.
Methods for calculating a defocus amount are disclosed in Japanese Laid-Open Patent Application Nos. 58-142306 and 59-107313. In these publications, a defocus amount is determined as discrete data, which may be intermittently and periodically detected. The periods should preferably be at least 30 ms, because the accumulation time for an AF sensor generally requires, for example, 10 μs to 100 ms, depending on the illuminance of the object, and because approximately 10 ms is used for the algorithm's calculation time. Therefore, calculating the moving speed of the image plane of the object from an intermittently detected defocus amount is an important aspect of the tracking servo method.
The inventor has developed an overlap servo-type automatic focus adjustment apparatus, which accumulates a charge in an AF sensor and simultaneously drives the shooting lens, as disclosed in Japanese Laid-Open Patent Application No. 4-133016 (JP016). A graph representing a method for calculating an image plane moving speed of an object according to JP016 is illustrated in FIG. 12.
FIG. 12 is a graph illustrating the detection of the image plane moving speed of an object. In FIG. 12, the horizontal axis t represents time, and the vertical axis z represents distances on the optical axis. Line Q represents focusing positions proximate the film's surface; as the object moves, the focusing positions follow the object's movement in timed coordination with the object. Line L represents the actual image plane position of the shooting lens. Therefore, the difference between lines Q and L is the defocus amount D, which is directly measured.
Points, such as t(n), t(n-1), . . . , on the horizontal axis are midpoints of each accumulation time for the AF sensor. The accumulation time is the period of time represented between two vertical lines drawn to lines Q and L. (The vertical axis, horizontal axis, line Q and line L have the same meaning in other drawings in this disclosure.) The defocus amounts at, for example, times t(n-1) and t(n) are D(n-1) and D(n), respectively, each obtained from the AF sensor accumulation centered at that time.
In FIG. 12, when defocus amount D(n) is obtained from a measurement at time t(n), the previous defocus amount D(n-1) was obtained at time t(n-1), and the moving amount or distance M(n) of the shooting lens is obtained between times t(n-1) and t(n). Therefore, the movement amount P(n) of the image forming plane for the object from time t(n-1) to time t(n) is calculated according to Equation (1):

P(n) = D(n) + M(n) - D(n-1)   (1)
The moving speed S(n) of the image plane for the object is calculated according to Equation (2):

S(n) = P(n)/{t(n) - t(n-1)}   (2)
Defocus amount D is measured as a distance along the optical axis. The movement amount of the shooting lens can be detected as the output pulse count of an encoder, which detects the rotation of the lens driving motor. The speed of the image plane for an object is obtained as a moving distance along the optical axis per unit time, and is accomplished by multiplying the encoder's output pulse count by a proportional constant, which is determined from the shooting magnification of the shooting lens. If the speed of the image plane for the object is instead obtained as an encoder pulse count per unit time, the defocus amount is converted to a pulse count.
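For illustration only, the calculation of Equations (1) and (2), together with the pulse-to-distance conversion described above, can be sketched in Python. The function and parameter names, and the constant k_mag (millimeters per encoder pulse), are assumptions of this sketch, not part of the disclosed apparatus.

```python
# Illustrative sketch (not the patent's firmware) of Equations (1) and (2).

def pulses_to_mm(pulses, k_mag):
    """Convert encoder output pulses to distance along the optical axis.
    k_mag is an assumed proportional constant (mm/pulse) determined from
    the shooting magnification of the shooting lens."""
    return pulses * k_mag

def image_plane_speed(d_n, d_prev, m_n, t_n, t_prev):
    """Image plane moving speed S(n) in mm/s.

    d_n, d_prev -- defocus amounts D(n) and D(n-1), in mm
    m_n         -- lens movement M(n) between t(n-1) and t(n), in mm
    t_n, t_prev -- measurement times t(n) and t(n-1), in s
    """
    p_n = d_n + m_n - d_prev      # Equation (1): image plane movement
    return p_n / (t_n - t_prev)   # Equation (2): movement per unit time

# Example: the lens moved 120 encoder pulses at an assumed 0.005 mm/pulse.
m_n = pulses_to_mm(120, 0.005)                        # 0.6 mm
s_n = image_plane_speed(0.3, 0.1, m_n, 0.130, 0.100)
print(round(s_n, 3))                                  # → 26.667
```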
However, an intermittently measured defocus amount includes errors caused by factors such as noise in the AF sensor signals or the setting of the depth of the distance or range measuring area for the object. Since the difference between two defocus amounts is calculated to determine the speed of the image plane for an object (Equation (1)), errors in the distance measurement greatly affect the result. In particular, if the two defocus measuring times t(n) and t(n-1) are adjacent, the computed speed of the object image plane will be inaccurate and unstable due to these consecutive errors.
To overcome this problem, JP016 proposes determining the image plane speed from defocus amounts determined at two non-adjacent time periods. The defocus amount, according to JP016, uses a time period prior to the present period. Thus, the movement amount is determined as described with reference to FIG. 13.
FIG. 13 is a graph describing the principle for detecting the image plane speed of an object from a defocus amount D(n-2), obtained two time periods before the instant period t(n), and the most recent defocus amount D(n). In FIG. 13, the movement amount P(n) of the object image forming plane is defined by Equation (3):

P(n) = D(n) + M2(n) - D(n-2)   (3)
In Equation (3), M2(n) represents a distance that the shooting lens moves from time t(n-2) to time t(n).
From Equation (3), the moving speed S(n) of the image plane for the object is defined according to Equation (4):

S(n) = P(n)/{t(n) - t(n-2)}   (4)
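The non-adjacent variant of Equations (3) and (4) can be sketched in the same way; the names below are again illustrative assumptions rather than the patent's notation in code form.

```python
# Illustrative sketch of Equations (3) and (4): speed computed from defocus
# amounts two measurement periods apart.

def image_plane_speed_2back(d_n, d_n2, m2_n, t_n, t_n2):
    """S(n) from D(n) and D(n-2).

    d_n, d_n2 -- defocus amounts D(n) and D(n-2), in mm
    m2_n      -- M2(n): lens travel from t(n-2) to t(n), in mm
    t_n, t_n2 -- measurement times t(n) and t(n-2), in s
    """
    p_n = d_n + m2_n - d_n2     # Equation (3)
    return p_n / (t_n - t_n2)   # Equation (4)
```

Because the denominator t(n) - t(n-2) is larger than for adjacent measurements, a fixed error in each individual defocus amount perturbs the computed speed proportionally less.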
If the movement amount for the image forming plane is determined using a defocus amount and distance measurement time from prior time periods, the movement amount is relatively larger than that obtained from adjacent defocus amounts, so measurement errors affect the result proportionally less. This increases the accuracy of the measurements.
However, since responsiveness decreases as the object's speed varies over time, the method of determination should be selected depending on the size of the movement amount of the image forming plane and on how many periods before the instant or present time are used in the calculation. In other words, when the speed of the image forming plane is fast, it is not necessary to use data from such prior time periods. However, when the speed of the image forming plane is relatively slow, it is useful to use data from prior time periods.
A time period of approximately 300 ms is useful for a responsive determination of a defocus amount. The possible number of times to measure a defocus amount during this time is dependent on several factors, including the object's illuminance and the calculation speed of the CPU. Therefore, appropriate measurement data should be selected from prior measurement data for this time range.
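As a hedged sketch of this selection idea, prior measurements could be filtered to the responsiveness window and the oldest surviving sample used for comparison. The 300 ms window constant and the list-of-(time, defocus) representation are assumptions for illustration, not the disclosed method.

```python
# Illustrative selection of prior measurement data within an assumed
# ~300 ms responsiveness window.

WINDOW_S = 0.300  # approximate responsiveness window from the text

def pick_comparison_sample(history, t_now):
    """history: list of (t, D) pairs, oldest first.
    Returns the oldest sample still inside the window, or None."""
    recent = [(t, d) for (t, d) in history if t_now - t <= WINDOW_S]
    return recent[0] if recent else None

history = [(0.00, 0.5), (0.20, 0.4), (0.35, 0.3)]
print(pick_comparison_sample(history, 0.40))   # → (0.2, 0.4)
```

Using the oldest in-window sample maximizes the time baseline for the speed calculation while keeping the data recent enough to track a varying object speed.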
Japanese Laid-Open Patent Application No. 3-80235 (JP235) proposes determining a linear regression diagram or curve from stored data, including defocus amounts and measurement times, up to when the release of the shutter is started. However, JP235 assumes that the object is stationary; in other words, JP235 assumes that the focusing position is fixed. Thus, a linear regression diagram or curve is calculated to predict the time change of a repeatedly measured defocus amount during the lens drive that focuses a stationary object using the overlap servo. This predictive diagram or curve is constructed according to Equation (5):

y = a + b·t   (5)
If the defocus amount and the accumulation time for the AF sensor are expressed as D(k) and t(k), respectively, the parameters a and b can be expressed according to Equations (6) and (7):

b = {Σt(k)·D(k) - Σt(k)·ΣD(k)/n}/{Σt(k)² - (Σt(k))²/n}   (6)

a = {ΣD(k) - b·Σt(k)}/n   (7)

where Σ denotes summation over k = 1 to n.
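A minimal sketch of the least-squares fit of Equations (6) and (7) follows; holding the samples in two parallel lists is an assumed representation for illustration.

```python
# Illustrative sketch of the regression-line fit of Equations (6) and (7):
# ordinary least squares of defocus amount D(k) against measurement time t(k).

def fit_regression_line(ts, ds):
    """Return (a, b) of the prediction line y = a + b*t."""
    n = len(ts)
    s_t = sum(ts)                               # Σt(k)
    s_d = sum(ds)                               # ΣD(k)
    s_td = sum(t * d for t, d in zip(ts, ds))   # Σt(k)·D(k)
    s_tt = sum(t * t for t in ts)               # Σt(k)²
    b = (s_td - s_t * s_d / n) / (s_tt - s_t * s_t / n)   # Equation (6)
    a = (s_d - b * s_t) / n                               # Equation (7)
    return a, b

a, b = fit_regression_line([0.0, 1.0, 2.0], [1.0, 3.0, 5.0])
print(a, b)   # → 1.0 2.0
```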
FIG. 14 is a graph illustrating the principle for determining the linear regression diagram or curve. If the time at which the defocus amount predicted by Equation (5) becomes zero is determined, that time is the most suitable time for exposing the film. JP235 proposes that exposure be commenced at this time.
Japanese Laid-Open Patent Application No. 1-107224 (JP224) proposes to predict the movement locus for an image forming plane with a diagram or curve, such as a quadratic function rather than a linear function.
FIG. 15 is a graph illustrating the principle by which the movement locus of the image forming plane is predicted according to JP224. In JP224, the lens drive and the AF sensor accumulation and defocus amount calculation do not overlap. If DF1 is a defocus amount detected at a previous time; DF2 is the defocus amount detected immediately after the previous time; DF3 is the defocus amount of the instant time; DL1 is the lens drive amount between the detections of DF1 and DF2; and DL2 is the lens drive amount between the detections of DF2 and DF3; then the image plane can be expressed according to the quadratic Equation (8):

x = a·t² + b·t + c   (8)
where TM1 is the time interval between the detections of DF1 and DF2, TM2 is the time interval between the detections of DF2 and DF3, and the parameters a, b, and c are determined according to Equations (9)-(11):

a = (DF3 + DL2 - DF2)/{(TM1 + TM2)·TM2} + (DF1 - DL1 - DF2)/{(TM1 + TM2)·TM1}   (9)

b = (DF2 + DL1 - DF1 - a·TM1²)/TM1   (10)

c = DF1   (11)
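For illustration, Equations (9)-(11) can be computed directly; the function and argument names below mirror the text's symbols and are assumptions of this sketch.

```python
# Illustrative sketch of Equations (9)-(11): coefficients of the quadratic
# prediction x = a*t**2 + b*t + c from three defocus detections DF1..DF3,
# the intervening lens drives DL1 and DL2, and time intervals TM1 and TM2.

def quadratic_coeffs(df1, df2, df3, dl1, dl2, tm1, tm2):
    a = ((df3 + dl2 - df2) / ((tm1 + tm2) * tm2)
         + (df1 - dl1 - df2) / ((tm1 + tm2) * tm1))   # Equation (9)
    b = (df2 + dl1 - df1 - a * tm1 * tm1) / tm1       # Equation (10)
    c = df1                                           # Equation (11)
    return a, b, c

# Example: an image plane following x = 2t² + 3t + 1, sampled at t = 0, 1, 2 s,
# with assumed lens drives DL1 = 4 and DL2 = 8 between detections.
print(quadratic_coeffs(1.0, 2.0, 3.0, 4.0, 8.0, 1.0, 1.0))   # → (2.0, 3.0, 1.0)
```

Evaluating x = a·t² + b·t + c at the expected exposure time then gives the predicted focusing position for the lens drive.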
In JP224, during a normal auto-focus lens drive, i.e., before a release sequence, a target for the lens drive is determined immediately after each defocus amount has been detected. The focusing position for the lens drive is calculated according to Equation (8) for the moment after a total time has elapsed, including a specified time applied for the lens drive and a delay, known as the release time lag, up to the film exposure in the release sequence. In other words, by forcibly synchronizing the start of a release sequence to immediately after the completion of each lens drive and movement, the lens drive is controlled so that the lens is positioned at the focusing position at the film exposure time. Therefore, the start of a release sequence will only be accepted when each lens drive has been completed. FIG. 15 graphically illustrates this method.
As explained in Japanese Laid-Open Patent Application No. 2-256677 (JP677), differences between measurement data from prior time periods are used to increase the accuracy of image plane speed detection. However, in JP677 some data is factored out and not considered in the calculation; thus, all of the useful information is not fully exploited.
A method for determining a linear regression diagram or curve is disclosed in Japanese Laid-Open Patent Application No. 3-80235 (JP235). JP235 is considered accurate in that it considers information that is normally factored out, for example, in JP677. However, JP235 discloses only stationary objects and is used only to obtain the most appropriate exposure timing after the start of a release sequence. Additionally, JP235 assumes that the driving speed of the shooting lens is constant and that the distance measurement intervals are always equal, resulting in poor reliability.
JP235 does not disclose basic concepts for predicting an image plane position using regression diagrams or curves for the tracking servo method before entering into a release sequence. Thus, JP235 is not effectively used for a lens drive with a moving object.
Japanese Laid-Open Patent Application No. 1-107224 (JP224) treats the prediction diagram or curve for the image plane, the quadratic function of Equation (8), as the actual motion of the image plane; that is, the image plane motion is considered quadratic rather than linear. Therefore, where the AF sensor accumulation and lens drive do not overlap, the focusing position may be predicted more accurately using Equation (8) than using the linear function of Equation (5). Further, in JP224 the lens driving time, when the defocus amount is detected, is constant regardless of the lens drive amount, and the lens drive is completed within this driving time. Alternatively, the driving time is limited to a time within which a normal lens drive could be completed.
Therefore, with JP224, each subsequent accumulation of the AF sensor is always delayed by the length of time needed to drive the lens. Furthermore, the number of defocus amount detections per unit time is smaller than when the AF sensor accumulation and lens drive overlap.