1. Field of the Invention
The field of invention relates generally to exposure control systems for imaging devices, and more specifically, exposure control systems of the type in which there is a need for a rapid and smooth response to exposure control commands.
2. Background of the Invention
In recent years, the need for miniature lightweight video cameras in the medical and industrial fields has developed. These cameras typically comprise a camera head containing an imaging device, such as a charge-coupled device (CCD), a light source, a camera control unit containing control and video processing circuitry, and a display device, such as a computer monitor, television screen, or a microscope. The camera head is attached to the camera control unit via a cable or through a wireless interface, thereby allowing the camera head to be inserted into and positioned at remote and/or confined locations. Once the camera head is positioned, light from the light source is used to illuminate the location of interest, typically after passage through the cable. The light reflects off the location of interest to form images of the desired location. The images are captured by the imaging device in the camera head, and then displayed on the display device.
A medical application for such video cameras is endoscopy, in which an endoscope is passed through a small incision in the patient to permit viewing of the surgical site. The endoscope is optically coupled to the camera head. Images of the surgical site are captured by the imaging device in the camera head, and displayed on the display device. Advantageously, the endoscope allows substantially non-invasive viewing of the surgical site.
Likewise, numerous industrial applications exist for such video cameras. In one such application, a video camera in combination with other tools allows work to be performed on areas that would otherwise not permit access. Examples include the use of miniature video cameras to view inaccessible piping networks situated behind drywalls and the like, interior locations on industrial equipment, and underwater locations in sunken ships or the like inaccessible by divers.
Additional details on endoscopic video cameras are contained in U.S. Pat. Nos. 5,841,491; 5,696,553; 5,587,736; 5,428,386; and co-pending U.S. patent application Ser. Nos. 09/044,094; 08/606,220; and 08/589,875; each of which is owned by the assignee of the subject application, and each of which is hereby fully incorporated by reference herein as though set forth in full.
A characteristic of many of these applications is a diverse and rapidly changing scene of interest. In endoscopic applications, for example, as the surgeon manipulates the endoscope, the scene of interest may rapidly change to encompass one or more bodily fluids or structures, including blood, which is light absorptive, moist tissue, which is light reflective, and other diverse body structures such as cartilage, joints, and body organs. The bright light sources typically used in such applications, coupled with the diverse and rapidly changing reflective characteristics of elements within the field of view, give rise to an illumination level of reflected light which changes rapidly and over a broad range. The result is that image capture devices such as the CCD can easily become saturated and over-exposed. Exposure control systems are thus needed to avoid saturation of the image capture device, to avoid overexposure and underexposure conditions, to deal with the diverse and rapidly changing reflection characteristics of the elements in the scene of interest, and also to ensure that the image capture device and related components are operating in optimal or preferred ranges.
Unfortunately, current exposure control systems react too slowly to the changing reflection characteristics, and develop unstable brightness fluctuations or oscillations if configured to run more quickly. It is not uncommon for these systems to take up to several seconds to react to overexposure and underexposure conditions, during which the image is lost and the scene is unviewable. This image loss makes these current systems unsuitable for endoscopic applications, in which any image loss poses an unacceptable health risk to the patient given that power tools, sharp surgical instruments, and electrosurgical devices can quickly damage healthy tissue if they are not continuously in view and controllable by the surgeon. Similar concerns are present in industrial applications in which the power tools utilized by the industrial operator may quickly damage the industrial work site.
The problem is compounded due to the nature of current image capture devices, such as CCDs, in which there is an inherent delay between the detection of a condition requiring a change in the exposure level of the device, and the response of the device to such a command. The problem can be explained with reference to FIG. 1, which illustrates a video camera system in which the image capture device is a CCD. The imaging system comprises sensor array 5, readout register 6, amplifier 7, video display device 8, and control device 1. Together, the sensor array 5 and readout register 6 comprise CCD or image sensor 9.
The sensor array 5 comprises a plurality of individual photo sites 14 typically arranged in rows and columns. Each site is configured to build up an accumulation of charge responsive to the incidence of illumination on the site. The geometry is such that the spatial distribution of charge build-up in the individual sites matches the spatial distribution of the intensity of light reflected from the scene of interest and defining the scene image. An image is captured when the charge is allowed to build up in the individual sites in the same spatial orientation as the spatial orientation of the reflected light defining the image.
Periodically, the accumulated charge in the individual sites is removed, and stored in readout register 6. Then, the contents of the readout register 6 are serially output onto signal line 15 in a manner which is consistent with the protocol of display device 8. The signal on signal line 15 is then provided as an input to display device 8. The output on signal line 15 comprises the output of image sensor 9.
In one implementation, a video frame must be presented to the display device 8 once every 1/60 second, or 16.67 milliseconds (ms). A video frame in this implementation consists of 262.5 lines of pixels, with a line being presented every 63.6 µs. According to this implementation, the accumulated charge in the individual sites 14 of sensor array 5 is removed to readout register 6, and the contents of the readout register 6 serially output on signal line 15, once every 1/60 second.
Control device 1 controls the time period during which the individual sites 14 in sensor array 5 are allowed to accumulate charge during a frame. In one implementation, this is accomplished as follows. Control device 1 determines a control parameter equal to the number N of lines in a frame that the individual sites are to be kept in a discharged state. It then sends over control line 2 a plurality of discharge pulses in which the number of pulses is N, and the duration of the pulses is N×63.6 µs. The remaining portion of the frame, known as the integration time or integration time period, is the time period over which the individual sites are allowed to accumulate charge. Since the frame time is 16.67 ms, and the time per line is 63.6 µs, the integration time in milliseconds per frame is 16.67 − N×63.6×10⁻³.
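The arithmetic above can be sketched in a few lines. This is an illustrative model only, using the figures given in the text (16.67 ms per frame at 1/60 second, 63.6 µs per line); the function name and constants are not part of the patent.

```python
# Illustrative sketch of the integration-time arithmetic described above.
FRAME_TIME_MS = 16.67   # one NTSC frame period, 1/60 second
LINE_TIME_US = 63.6     # duration of one line of pixels

def integration_time_ms(n_discharge_lines):
    """Integration time remaining in a frame after N lines are held discharged."""
    return FRAME_TIME_MS - n_discharge_lines * LINE_TIME_US * 1e-3
```

For example, with N = 100 discharge lines, the sites are held discharged for 6.36 ms, leaving an integration time of 10.31 ms.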
The situation is illustrated in FIG. 2. FIG. 2(a) illustrates a timing pulse which, in accordance with the NTSC standard, occurs every 1/60 sec., in which each occurrence of the pulse defines a separate frame capture. These timing pulses define separate frame periods. Indicated in the figure is the capture of frames 1, 2, and 3. FIG. 2(b) illustrates the discharge pulses which are sent to the sensor array 5 by control device 1 over control line 2. As indicated, for frame 1, N1 discharge pulses are sent; for frame 2, N2 discharge pulses are sent; and for frame 3, N3 discharge pulses are sent. Also indicated are the integration times for each frame. For frame 1, the integration time is τ1; for frame 2, the integration time is τ2; and for frame 3, the integration time is τ3. Although the frame periods are shown as adjacent to one another, in practice, as one of skill in the art would appreciate, the frame periods are separated by intervening vertical blanking intervals (not shown in FIG. 2).
FIG. 2(c) illustrates the average charge build-up in the individual sites for each frame, that is, the charge build-up for a site averaged over all or substantially all of the individual sites or pixels. As indicated, for frame 1, the average charge build-up is Q(τ1); for frame 2, the average charge build-up is Q(τ2); and for frame 3, the average charge build-up is Q(τ3).
FIG. 2(d) illustrates a signal representative of the average intensity of the frames displayed on display device 8. The average intensity of the first frame is related to Q(τ1); that of the second frame is related to Q(τ2); and that of the third frame is related to Q(τ3).
As can be seen by comparing FIGS. 2(a) and 2(d), there is a frame period delay, which in this implementation, is a 1/60 sec. time period delay, between the time a frame is captured and the time the frame is displayed. Consequently, there will be a two frame delay between the time a condition is detected warranting a change in the CCD integration time, such as an overexposure or underexposure condition, and the time a frame reflecting the changed integration time is displayed. To see this, consider a first frame period in which the control unit 1 detects an overexposure or underexposure condition, or a changing intensity scene warranting a change in the CCD integration time. Since frame capture may have already begun during the first frame period, the earliest that a command can be effective to change the integration time is the next successive frame period. Moreover, because there is a frame period delay between the time a frame is captured, and the time the frame is displayed, there will be another frame period delay before the frame reflecting the changed integration time can be displayed. The end result is a two frame period delay. This delay compounds the difficulty of rapidly responding to overexposure or underexposure conditions, or a changing intensity scene.
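The two-frame delay reasoned through above can be expressed as a small timing model. This is a sketch of the pipeline logic only; the function name is illustrative and not taken from the patent.

```python
def display_frame_of_command(command_frame):
    """Frame in which a changed integration time first appears on the display.

    A command issued during frame k can first affect capture in frame k+1
    (capture of frame k may already be under way), and a captured frame is
    displayed one frame period later, giving frame k+2 overall.
    """
    capture_frame = command_frame + 1   # earliest frame the new time applies to
    display_frame = capture_frame + 1   # one-frame capture-to-display delay
    return display_frame
```

So a change commanded during frame 1 is first visible in frame 3, the two frame period delay described above.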
In U.S. Pat. No. 5,162,913, Chatnever, et al., entitled "Apparatus for Modulating the Output of a CCD Camera," a method and apparatus for automatic exposure control is proposed in which a trial adjustment to the gain of an amplifier coupled to the output of the CCD is determined. This trial adjustment attempts to move the intensity of the image represented by the signal output from the amplifier to an ideal value. Before, however, the trial adjustment is implemented, it is compared with the optimal range of the amplifier, in one implementation, 0-6 dB. If the trial adjustment exceeds the optimal range of the amplifier, it and the CCD integration time are adjusted or readjusted in opposite directions to allow the amplifier to operate within its optimum range. When the adjustment to the CCD integration time is reflected in the output of the amplifier, the trial adjustment is made to the amplifier gain.
The problem with this approach is that the adjustment to the amplifier gain must be deferred for two frames until a frame reflecting the corresponding integration time is displayed. The result is that the condition warranting a change in intensity level is allowed to continue into the next frame. The camera is thus unable to quickly and smoothly respond to scene and illumination changes.
Therefore, there is a need for an exposure control apparatus and method that, compared to current approaches, more rapidly responds to conditions warranting a change in exposure level.
There is also a need for an exposure control apparatus and method that, compared to current approaches, more smoothly responds to conditions warranting a change in exposure level.
There is also a need for an exposure control apparatus and method that overcomes the disadvantages of the prior art.
The objects of the subject invention include fulfillment or achievement of any of the foregoing needs, alone or in combination. Additional objects and advantages will be set forth in the description which follows, or will be apparent to those of ordinary skill in the art who practice this invention.
The present invention comprises a method and apparatus for automatic exposure control of an imaging device. In one embodiment, a device configured in accordance with the subject invention is configured to measure a brightness parameter of an image, and, based thereon, adjust the imaging device integration time and amplifier gain to bring the brightness parameter to a desired level. Unlike current approaches, the adjustment to amplifier gain is not deferred until the corresponding adjustment to integration time is reflected in the output of the amplifier. The result is a more rapid and smooth response to exposure control commands.
In one embodiment, during or upon completion of a first frame period, the desired effective exposure is derived from the measured brightness parameter in relation to a desired effective exposure. In one implementation, the brightness parameter is average luminance of the image represented by the signal output from the amplifier, but brightness parameters relating to peak luminance, or peak or average chrominance, are also expressly contemplated. The desired effective exposure is a composite measure of the effective exposure of the imaging device which is needed to bring the measured brightness parameter in line with a desired brightness parameter. It relates both to the integration time of the image sensor, and the gain of the amplifier coupled to the output of the sensor. It reflects the fact that adjustments to either can impact the brightness parameter of the signal output from the amplifier. More specifically, it reflects the fact that the brightness parameter of the amplifier output can be increased through an increase in the amplifier gain or the integration time of the image sensor, and that an increase in one or the other of these two values can be offset by a decrease in the other. In one implementation, the desired effective exposure is derived from the product of the gain of the amplifier and the integration time of the image sensor.
In one implementation, a brightness parameter ratio is derived from the ratio of the desired brightness parameter to the measured brightness parameter. Optionally, the ratio is subject to further processing, such as filtering, clipping, and/or hysteresis processing steps designed to remove high frequency noise, prevent large and rapid changes to the ratio, and avoid minute changes to the ratio which could cause undesirable hunting behavior and oscillations.
According to this implementation, the desired effective exposure is derived from the product of the brightness parameter ratio (after the optional processing described above) and the desired effective exposure of a second previous frame. In one implementation example, the first and second frames are successive frames such that the second frame directly or immediately precedes the first frame.
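The update described in the preceding paragraphs can be sketched as follows. This is an illustrative model under stated assumptions: the function name, the clipping bound, and its default value are hypothetical stand-ins for the optional filtering, clipping, and hysteresis steps, and are not specified by the patent.

```python
def desired_effective_exposure(desired_brightness, measured_brightness,
                               previous_exposure, max_step=2.0):
    """Update the desired effective exposure from the brightness parameter ratio.

    The ratio of the desired to the measured brightness parameter is clipped
    (a stand-in for the optional filtering/clipping/hysteresis processing) and
    multiplied by the desired effective exposure of the preceding frame.
    """
    ratio = desired_brightness / measured_brightness
    # Clip the ratio to prevent large and rapid changes (illustrative bound).
    ratio = max(1.0 / max_step, min(max_step, ratio))
    return ratio * previous_exposure
```

For instance, a frame measured at half the desired brightness doubles the desired effective exposure, while a frame far too bright is pulled down no faster than the clipping bound allows.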
In one embodiment, during the first frame period, the amplifier gain determined in or upon completion of the second previous frame is applied by the amplifier to the output of the image sensor, and the integration time period determined in or upon completion of the second previous frame period is applied to image capture. In this embodiment, during or upon completion of the first frame period, the gain and integration time to be applied in a subsequent third frame period are also determined. In one implementation, the gain to be applied in the third subsequent period is derived from the ratio of the desired effective exposure determined in or upon completion of the first period to the integration time period applied to image capture in the first frame period. Also in this implementation, the integration time to be applied to image capture in the third frame period is derived from the ratio of the desired effective exposure determined in or upon completion of the first frame period to a nominal gain determined so that the amplifier is allowed to operate in a desired region of operation. In one implementation, the nominal gain is determined to minimize the introduction of noise into the signal output from the amplifier, while, at the same time, achieving acceptable operation under low light level conditions. In one implementation example, the first and third frame periods are successive frame periods such that the third frame period is the immediate or direct successor to the first frame period, with an intervening vertical blanking interval.
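The split of the desired effective exposure between gain and integration time described above can be sketched as below. The names and the nominal gain value are illustrative assumptions, not values from the patent; the ratios themselves follow the derivation in the text.

```python
NOMINAL_GAIN = 1.0   # assumed value; chosen so the amplifier stays in its
                     # preferred region of operation

def next_gain_and_integration(desired_exposure, current_integration_time,
                              nominal_gain=NOMINAL_GAIN):
    """Derive the gain and integration time for the third frame period.

    The gain compensates for the integration time already applied to image
    capture in the first frame period; the new integration time is sized so
    that the amplifier can operate at its nominal gain once the change takes
    effect.
    """
    gain = desired_exposure / current_integration_time
    integration_time = desired_exposure / nominal_gain
    return gain, integration_time
```

With a desired effective exposure of 20 and a current integration time of 10 (arbitrary units), the gain applied in the third period is 2.0 and the new integration time is 20, so the gain can return toward nominal once the longer integration time reaches the amplifier output.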
If necessary, the gain and integration time to be applied in the third period are then stored. Upon the occurrence of the third period, the gain is retrieved and applied by the amplifier to the output of the image sensor in the manner previously discussed. In addition, the integration time period is retrieved and applied to image capture in the manner previously discussed. The foregoing process then repeats itself. In one implementation, this process occurs every or substantially every frame period.
In the foregoing embodiment, since the adjustment to amplifier gain is not deferred until the corresponding adjustment to integration time is reflected in the output of the amplifier, a more rapid and smooth response to a condition requiring a change in desired effective exposure can be achieved compared to current approaches, such as that described in U.S. Pat. No. 5,162,913. If the gain adjustment causes the amplifier to operate outside a desirable range of operation, this condition is short-lived and is eliminated when the adjustment to integration time is reflected in the output of the amplifier. At that point, the amplifier gain can be returned to a nominal value selected to permit the amplifier to operate in a desired region of operation.
In one implementation, the gain and integration time period adjustments are determined and controlled by a processor, such as a microcomputer, microprocessor, or digital signal processor, configured to execute software code embodying the subject invention stored in a computer readable medium such as memory accessible by the processor. In another implementation, the gain and time period adjustments are determined and controlled by analog circuitry configured in accordance with the subject invention. In one implementation example, the analog circuitry comprises a brightness parameter module for determining the brightness parameter ratio; a desired effective exposure module for determining the desired effective exposure responsive to the brightness parameter ratio after optional filtering, clipping, and hysteresis processing steps; an integration time module for determining the integration time period for image capture; and a gain module for determining amplifier gain responsive to the integration time period determined in a previous frame.
Related systems, apparatus, computer readable media, and methods are also provided.
It is contemplated that the subject invention can be beneficially employed in any application in which the need to rapidly and smoothly adjust the exposure of a video camera is desirable. Examples of these various applications include but are not limited to medical endoscopy, underwater remote imaging devices, military or law enforcement video systems, and night vision systems.
Other applications, and other advantages of the present invention, will be apparent after reading the detailed description that follows.