1. Field of the Invention
The present invention relates to video cameras, and more particularly, to an automatic iris control circuit for automatically adjusting the aperture size to regulate the amount of light incident into the optical system of a video camera according to the incident amount of light.
2. Description of the Background Art
Referring to FIG. 1, a conventional video camera includes a lens 102 for bringing together incident light from an object to form an image on a predetermined image formation plane, an image sensing device 104 having an image sensing plane arranged on the image formation plane for converting the optical image on the image sensing plane into a video signal, which is an electrical signal, by photoelectric conversion, an iris plate 103 for regulating the amount of light incident upon image sensing device 104, an iris motor 113 for driving iris plate 103, a preamplifier 116 for amplifying the video signal provided from image sensing device 104 and providing the same to a signal processing circuit (not shown) for conversion into a television-format signal, and an automatic iris control circuit 217 responsive to the video signal provided from preamplifier 116 for operating iris motor 113 so that the level of the video signal attains a predetermined level.
Automatic iris control circuit 217 includes a gain control amplifier (GCA) 206 for amplifying the video signal provided from preamplifier 116 with a gain that is greater for the portion corresponding to the central portion of the screen than for the remaining portion of the screen, a voltage adder 207 for adding to the amplified video signal provided from GCA 206 a sawtooth voltage that raises the voltage in the lower portion of the screen, a detection circuit 208 for averaging the output of voltage adder 207, and a comparator 209 having a negative input terminal connected to the output of detection circuit 208 and a positive input terminal connected to a variable reference voltage (V.sub.IR) 210, for comparing the signal provided from detection circuit 208 with reference voltage 210 to drive iris motor 113 according to the comparison output.
Referring to FIG. 2, the amplification carried out in GCA 206 is for the purpose of giving a particularly great weight to the central portion 252 of a screen 251. When object 254 is located at central portion 252, as shown in FIG. 3A, iris plate 103 is driven to obtain an appropriate image sensing state of object 254 owing to the amplification by GCA 206. Even if ground 255 appears in the background scenery under backlight, photometry is carried out giving weight to object 254 rather than to ground 255 or the background scenery.
Referring to FIG. 2 again, voltage adder 207 adds an offset voltage to the lower region 253 of screen 251. When a sawtooth voltage such as that shown in FIG. 1 is added, the weight for controlling iris plate 103 increases towards the bottom of screen 251. Therefore, when object 256 is located with the sky as the background in screen 251, as shown in FIG. 3B, the diaphragm is adjusted to a value that allows object 256 to be shot in an optimum state without being affected by the luminance of the sky.
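The combined effect of GCA 206 and voltage adder 207 amounts to a weighting of the screen. The following Python sketch builds such a weight map; the grid dimensions, gain values, and the functions `metering_weights` and `metered_level` are illustrative assumptions, not part of the disclosed circuit:

```python
def metering_weights(rows=6, cols=8, center_gain=2.0, foot_gain=1.0):
    """Illustrative weight map: center-weighted metering (higher gain
    in the central block) combined with foot-weighted metering (a
    sawtooth offset growing towards the bottom of the screen)."""
    weights = []
    for r in range(rows):
        row = []
        for c in range(cols):
            w = 1.0
            # boost the central block of the screen (role of GCA 206)
            if rows // 3 <= r < 2 * rows // 3 and cols // 4 <= c < 3 * cols // 4:
                w *= center_gain
            # sawtooth offset towards the bottom row (role of voltage adder 207)
            w += foot_gain * r / (rows - 1)
            row.append(w)
        weights.append(row)
    return weights

def metered_level(luma, weights):
    """Normalized weighted average, as detection circuit 208 would form it."""
    total = sum(w * y for wr, yr in zip(weights, luma) for w, y in zip(wr, yr))
    return total / sum(w for wr in weights for w in wr)
```

With these weights, a bright sky in the top rows contributes less to the averaged level than an object in the central or lower region, which is exactly the metering bias described above.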
Referring to FIG. 1, a conventional video camera operates as follows. Light incident from object 101 is focused by lens 102 to form an image on the image sensing plane of image sensing device 104. Image sensing device 104 converts the optical image into a video signal by photoelectric conversion. The video signal is provided to preamplifier 116. Preamplifier 116 amplifies the output of image sensing device 104 and provides the same to a signal processing circuit and to the input of GCA 206.
As already described with reference to FIGS. 2 and 3A, GCA 206 amplifies the converted video signal corresponding to the central region of the screen by a gain that is greater than that of the surrounding region. The amplified signal is provided to voltage adder 207. As described with reference to FIGS. 2 and 3B, voltage adder 207 adds sawtooth voltage to the input signal so that the video signal representing the lower region of the screen becomes a greater value. This signal is provided to detection circuit 208, where the video signals from voltage adder 207 are averaged and provided to the negative input terminal of comparator 209. Comparator 209 compares the output of detection circuit 208 and reference voltage 210 to drive iris motor 113 according to the comparison output. Iris motor 113 drives iris plate 103 according to the drive voltage. The amount of light of the object image upon image sensing device 104 is adjusted by the open/close of iris plate 103.
The detailed operation of automatic iris control circuit 217 is as follows.
If the luminance of the object image is too high, the amplitude of the video signal provided from preamplifier 116 becomes great. This increases the average voltage of the video signal provided from detection circuit 208. When the output voltage of detection circuit 208 becomes greater than reference voltage 210, the output of comparator 209 shifts to a low potential. Iris motor 113 responds to the output of comparator 209 to close iris plate 103. This reduces the incident amount of light from the object to image sensing device 104.
If the incident amount of light from the object to image sensing device 104 is low, an operation opposite to that described above is carried out. That is to say, the output voltage of detection circuit 208 decreases to become lower than reference voltage 210. The output of comparator 209 shifts to a high potential so that iris motor 113 operates to open iris plate 103. Thus, the incident amount of light to image sensing device 104 increases.
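The two cases above form a simple negative-feedback loop. A minimal sketch, assuming the detected level is proportional to scene luminance times aperture opening and using an arbitrary step size (both assumptions, not figures from the circuit):

```python
def iris_step(detected_level, v_ref, aperture, step=0.05):
    """One control step: when the detected level exceeds the reference,
    comparator 209 goes low and iris motor 113 closes iris plate 103;
    when it is below the reference, the comparator goes high and the
    iris opens. Aperture is normalized to [0, 1]."""
    if detected_level > v_ref:
        return max(0.0, aperture - step)   # close the iris
    if detected_level < v_ref:
        return min(1.0, aperture + step)   # open the iris
    return aperture

def simulate(scene_luma, v_ref=0.5, aperture=1.0, iters=200):
    """Toy model: detected level = scene luminance x aperture opening."""
    for _ in range(iters):
        aperture = iris_step(scene_luma * aperture, v_ref, aperture)
    return aperture
```

For a bright scene the loop settles at a small aperture whose detected level hovers around the reference voltage, which is the equilibrium the comparator enforces.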
By the above-described operation of the automatic iris control circuit, the aperture size of iris plate 103 is adjusted to obtain an appropriate luminance of an object located in the central and lower regions of the screen. The operator of the video camera does not have to adjust the aperture size manually to obtain a desired shooting state.
As described above, a video camera having a conventional automatic iris control circuit employs center-weighted metering and foot-weighted metering. This is based on the typical shooting condition in which the object is usually located at the central region of the screen, or in the lower region with the sky above as the background. However, there are cases where center-weighted metering and foot-weighted metering do not result in an optimum diaphragm setting.
Consider the case where the entire background of object 254 is of high luminance in screen 251, as shown in FIG. 3C. In this case, the luminance of the lower portion of screen 251 becomes high. If foot-weighted metering is employed, the diaphragm will be operated towards the closing direction. This means that the sensed image of object 254 becomes dark.
Consider the case where object 254 is not located in the center portion 252, as shown in FIG. 3C. In this case, most of central portion 252 becomes a high luminance portion. The diaphragm will be operated towards the closing direction. This will also result in a very dark sensed image of object 254.
Such an object state is not rare; it is often encountered when shooting at a ski resort, for example, where the background scenery is snow of very high luminance. The state shown in FIG. 3C is therefore frequently seen at ski resorts.
For users that often shoot at a ski slope, an optimum aperture value cannot be obtained by automatic iris control with conventional center-weighted or foot-weighted metering, resulting in an unsatisfactory picture. Some video cameras allow the diaphragm to be adjusted manually. However, it is very difficult to control the iris while shooting. Thus, the above-described conventional video camera has the problem that the iris cannot be controlled easily.
A technique for controlling a camera according to the condition of the object, directed not to a video camera but to a still camera, is disclosed in Japanese Patent Laying-Open No. 2-96724.
Referring to FIG. 4, the still camera disclosed in Japanese Patent Laying-Open No. 2-96724 includes a lens 320, a diaphragm 319 provided in front of lens 320, and an in-focus mechanism 335 for focusing the optical image of an object at a predetermined image formation plane by moving lens 320 along the optical axis. The optical image of the object is provided to an amplifier 322 as data representing the luminance of the object for each photoelectric conversion device by an image sensing device 321 formed of photoelectric conversion devices arranged in a matrix. The luminance information of the object amplified by amplifier 322 is A/D converted by A/D converter 323 and provided to an operation circuit 324 as a stepped-down luminance BV'. Operation circuit 324 is supplied in advance with an aperture value AV.sub.0 representing the open aperture of diaphragm 319. Operation circuit 324 calculates and provides the actual luminance BV (=BV'-AV.sub.0) of the object from the two values BV' and AV.sub.0. The output object luminance BV is provided to a multiplexer 328 and a frame memory 334. The operation of multiplexer 328 will be described afterwards.
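The luminance arithmetic of operation circuit 324 is a single subtraction in APEX-style units; as a sketch (the numeric values in the example are illustrative only):

```python
def object_luminance(bv_prime, av0):
    """Operation circuit 324: actual object luminance BV obtained by
    removing the open aperture value AV0 from the measured value BV'."""
    return bv_prime - av0

# e.g. a measured value BV' = 9 with an open aperture value AV0 = 2
bv = object_luminance(9, 2)
```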
Frame memory 334 stores, for each photoelectric conversion device of image sensing device 321, the object luminance BV corresponding to that device's output. Frame memory 334 is connected to a neuro-computer 325, to which the luminance information stored in frame memory 334 is supplied, and from which a signal P.sub.xy representing the position of the main object in the picture stored in frame memory 334 is output. Neuro-computer 325 is connected to a coefficient memory 326 storing coefficients W.sub.ji that determine the operational process carried out by neuro-computer 325. Coefficients W.sub.ji are rewritten in the learning process of neuro-computer 325 so as to obtain an appropriate output corresponding to the input.
The output of neuro-computer 325 is connected to one input of a selector 336. The output of selector 336 is connected to multiplexer 328. The output of an operation panel 327 is connected to the other input terminal of selector 336. The output of operation panel 327 is also connected to neuro-computer 325. Operation panel 327 serves to provide to neuro-computer 325, at the time of the learning mode, a signal tp.sub.i representing the location of the main object in the picture stored in frame memory 334. Operation panel 327 includes, for example, a touch panel switch (not shown) having a one-to-one correspondence to positions on the screen.
The user inputs the position of the main object on the screen while looking through the finder, whereby signal tp.sub.i representing the position of the main object is provided from operation panel 327 to neuro-computer 325. Neuro-computer 325 carries out an operation, according to the coefficients stored in coefficient memory 326, on the input provided from frame memory 334 to provisionally determine an output. Neuro-computer 325 also compares signal tp.sub.i provided from operation panel 327 with its own output and rewrites coefficients W.sub.ji in coefficient memory 326 so that the offset is minimized. By repeating such learning several times, the artificial neural network implemented with neuro-computer 325 and coefficient memory 326 is self-organized to provide an appropriate signal P.sub.xy according to the input from frame memory 334.
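The learning described above can be sketched with a single-layer delta rule. The actual topology and update rule of neuro-computer 325 are not disclosed, so the following Python sketch is only an assumption that illustrates how repeated comparison with the teacher signal shrinks the offset between the network's output and the teacher:

```python
def learn_step(weights, inputs, target, lr=0.1):
    """One learning iteration: form a provisional output from the
    current coefficients, compare it with the teacher signal, and
    adjust each coefficient to reduce the offset (delta rule)."""
    output = sum(w * x for w, x in zip(weights, inputs))
    error = target - output
    return [w + lr * error * x for w, x in zip(weights, inputs)]

# repeated presentations "self-organize" the coefficients
w = [0.0, 0.0]
for _ in range(100):
    w = learn_step(w, [1.0, 0.5], target=1.0)
```

After enough repetitions the provisional output converges towards the teacher value, which is the sense in which the coefficient memory becomes self-organized.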
The output of operation panel 327 is also provided to selector 336. At the time of the learning mode, selector 336 provides the output of operation panel 327 to multiplexer 328. At the time of the automatic mode, selector 336 provides the output of neuro-computer 325 to multiplexer 328. Multiplexer 328 passes only the output BV of the photoelectric conversion devices corresponding to the main object of the screen designated by control signal P.sub.xy provided from neuro-computer 325 or operation panel 327. The passed output BV is provided to operation circuits 329 and 331.
Operation circuit 329 carries out operation for focus-detection according to the so-called hill-climbing method based on luminance output BV of the photoelectric conversion device corresponding to the main object. The output of operation circuit 329 is provided to a driver 330. Driver 330 moves in-focus mechanism 335 according to the supplied operation result to move lens 320 in its optical axis direction. By operation circuit 329, lens 320 stops at a location so that an image is formed on the light receiving plane of image sensing device 321.
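The hill-climbing method of operation circuit 329 can be sketched as follows; the contrast function, step size, and termination threshold here are assumptions (the real circuit derives its focus measure from the main-object luminance output BV):

```python
def hill_climb_focus(contrast, pos=0.0, step=0.5, max_iter=100):
    """Hill-climbing autofocus sketch: advance the lens while the
    focus measure increases; on a decrease, reverse direction and
    halve the step to home in on the peak."""
    best = contrast(pos)
    for _ in range(max_iter):
        cand = contrast(pos + step)
        if cand > best:
            pos, best = pos + step, cand   # still climbing
        else:
            step = -step / 2               # overshot the peak: reverse and refine
            if abs(step) < 1e-4:
                break
    return pos
```

The lens stops where the focus measure peaks, i.e. where the image of the main object is formed on the light receiving plane.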
Operation circuit 331 determines the shutter speed and the aperture value according to luminance BV of the main portion of the object provided from multiplexer 328, film sensitivity SV, aperture value AV of diaphragm 319, and the set shutter speed TV. The determined shutter speed and aperture value are provided to a shutter control device 332 and an iris control device 333, respectively.
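A determination from BV, SV, AV, and TV is consistent with the standard APEX relation AV + TV = BV + SV, although the patent does not spell the arithmetic out; the following sketch is an assumption on that basis:

```python
def apex_exposure(bv, sv, av=None, tv=None):
    """APEX relation AV + TV = BV + SV. Given object luminance BV and
    film sensitivity SV, plus either a set aperture value AV or a set
    shutter speed TV, solve for the remaining quantity."""
    ev = bv + sv
    if av is not None:
        return ev - av        # shutter speed TV for the set aperture
    if tv is not None:
        return ev - tv        # aperture value AV for the set shutter speed
    raise ValueError("supply av or tv")
```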
Thus, the still camera disclosed in Japanese Patent Laying-Open No. 2-96724 has the aperture value, the shutter speed, and the in-focus position determined according to the luminance information of not the entire object, but only the main portion of the object. The position of the main object is detected by neuro-computer 325. This detection is carried out according to the learning process specified by the user through operation panel 327. Therefore, the position of the main portion of the object can be detected in accordance with the user's preference. It is described in the aforementioned Japanese Patent Application that the main object can be photographed under an appropriate shooting state by controlling the camera according to the luminance of the main object.
However, the technique of Japanese Patent Laying-Open No. 2-96724 is directed to a still camera. There is no disclosure as to how this technique could be applied to a video camera. Although it is suggested in Japanese Patent Laying-Open No. 2-96724 that an exposure correction signal according to the brightness or the luminance pattern of the object could be provided as a teacher signal to such a neuro-computer for carrying out learning such as backlight correction, no specific structure is taught.
The still camera described in the embodiment of the above Japanese Patent Application, which is controlled according to the luminance information of only the main object, has the following problems. The relation between the luminance information of the background scenery, excluding the main object, and the luminance of the main object is critical in obtaining an optimum aperture value. However, it is impossible to optimize the luminance balance between the main object and the background with the technique disclosed in Japanese Patent Laying-Open No. 2-96724. This is not a problem in practice for a silver halide camera such as a still camera, since luminance adjustment can be carried out at the time of printing. For a video camera, however, there is little room for luminance adjustment at the time of reproduction. The technique disclosed in Japanese Patent Laying-Open No. 2-96724 therefore cannot be applied directly to a video camera.
There is also another problem. In a video camera, it is common to use the output of the image sensing device to obtain the video signal used for controlling the diaphragm. The number of photoelectric conversion devices arranged in the image sensing device is large in order to meet high picture-quality requirements. If the technique disclosed in Japanese Patent Laying-Open No. 2-96724 were applied to a video camera, with the output of each photoelectric conversion device stored in a frame memory and provided to the neuro-computer, the number of inputs to the neuro-computer would become too great for practical use. A great number of inputs to the neural network also results in a longer time period for the learning process of the neural network. Furthermore, if the output of each photoelectric conversion device is directly input to the neural network, the input values to the neural network will vary greatly in response to even a slight change in the position of the object, resulting in unstable operation of the neural network.
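Both the input-count problem and the positional instability can be quantified with a simple preprocessing step. One common remedy, offered here only as an illustration and not as part of the cited disclosure, is to average blocks of photo-site outputs into a coarse grid before feeding the network:

```python
def block_average(image, block):
    """Average block x block groups of photo-site outputs into a single
    input value each: e.g. a 480x640 readout with block=16 shrinks to a
    30x40 grid, and a small shift of the object perturbs each averaged
    input only slightly instead of swapping individual pixel values."""
    rows, cols = len(image), len(image[0])
    return [
        [
            sum(image[r + i][c + j] for i in range(block) for j in range(block))
            / (block * block)
            for c in range(0, cols, block)
        ]
        for r in range(0, rows, block)
    ]
```

Block averaging reduces the number of network inputs by the square of the block size, directly addressing the learning-time and stability objections raised above.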