1. Field of the Invention
The present invention relates to an image sensing device, such as a video camera or an electronic camera, in which the dynamic range of an image sensing element is expanded.
2. Description of the Prior Art
In general, the dynamic range of an image sensing element is determined by the ratio of the noise level to the saturation level in its output signal. It has been well known in the art that a device which controls the exposure to an image sensing element, such as a controller of the intensity of illumination, the aperture of a diaphragm, the shutter speed, the transparency of film, and so on, makes it possible to obtain a plurality of pictures of an object with different exposures, combine those pictures, and thereby obtain a picture with a wider dynamic range than that of the image sensing element itself (Rangaraj M. Rangayyan and Richard Gordon: "Expanding the dynamic range of x-ray videodensitometry using ordinary image digitizing devices", Applied Optics, Vol. 23, No. 18, pp. 3117-3120, 1984; Japanese Laid-Open Patent Application No. Sho 57-212448; Japanese Laid-Open Patent Application No. Sho 60-52171; Japanese Laid-Open Patent Application No. Sho 60-52172; Japanese Laid-Open Patent Application No. Sho 62-108678; Japanese Laid-Open Patent Application No. Hei 1-99036; and Japanese Laid-Open Patent Application No. Hei 2-100564).
As a concrete example, Japanese Laid-Open Patent Application No. Hei 4-146404 discloses a charge-coupled device (CCD) with an electronic shutter. The device obtains a set of two pictures of an object taken with two different exposures, i.e., different exposure times T1 and T2 controlled by the shutter speed. The reference is characterized by how it combines the two pictures, the first taken with the exposure T1 and the second with T2, to obtain a composite picture with an expanded dynamic range. The ways of combining the two pictures are described in the following two paragraphs.
FIG. 19(a) shows the relationship between the brightness, or intensity of incident light, on the receiving plane of the image sensing element and its output signal. In FIG. 19(a), the zero level is denoted by 0 and is defined as the output of the image sensing device with no incident light, neglecting noise, and the saturation level is denoted by Dsat. It is assumed that the output is linear in the intensity of incident light between 0 and Dsat, and that the exposures T1 and T2 satisfy T1<T2. D1 and D2 denote the gray values of the first and second pictures taken with the exposures T1 and T2, respectively. The gray value D1 is employed for a picture element, or pixel, in a bright area and D2 for a picture element in a dark area, to obtain a composite picture with a dynamic range expanded beyond that of the image sensing element itself.
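The linear-then-saturating response assumed above can be sketched, for illustration only, as follows; the proportionality constant k is an assumption introduced here, since the reference specifies only linearity up to the saturation level:

```python
import numpy as np

def sensor_output(intensity, exposure, k, d_sat):
    """Idealized response of FIG. 19(a): the output grows linearly
    with (intensity x exposure time) and is clipped at the
    saturation level d_sat. The gain constant k is hypothetical."""
    raw = k * np.asarray(intensity, dtype=float) * exposure
    return np.minimum(raw, d_sat)
```

Because T1 < T2, the picture taken with the longer exposure T2 saturates at a lower incident intensity, which is why D2 clips before D1 in the figures.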
Specifically, for each picture element, when the gray value D2 has not reached the saturation level Dsat, D2 is employed as the gray value of that picture element; when D2 has reached Dsat, the gray value D1 multiplied by the coefficient T2/T1 is employed instead. The reason for multiplying D1 by T2/T1 is to convert the first gray value D1 into a gray value corresponding to the sensitivity of the second gray value D2. Since the output signal of the image sensing element for a given intensity of incident light is directly proportional to the exposure until it reaches the saturation level Dsat, D2 reaches Dsat earlier than D1 does at the same intensity of incident light. The second gray value D2, taken with the longer exposure T2, is thus more sensitive than the first gray value D1; in other words, the sensitivity of D1 is T1/T2 times that of D2. Therefore, the gray value D1 multiplied by T2/T1 corresponds to the gray value D2.
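As an illustrative sketch only, not the reference's own implementation, the per-pixel selection rule above can be written as:

```python
import numpy as np

def compose_hard_switch(d1, d2, t1, t2, d_sat):
    """Hard selection between two exposures, per the rule above.

    d1, d2: gray-value arrays taken with exposure times t1 < t2.
    Where d2 is below the saturation level d_sat it is used as is;
    where d2 has saturated, d1 is rescaled by t2/t1 so that it
    corresponds to the sensitivity of d2."""
    d1 = np.asarray(d1, dtype=float)
    d2 = np.asarray(d2, dtype=float)
    return np.where(d2 < d_sat, d2, d1 * (t2 / t1))
```

The abrupt branch on whether D2 has saturated is precisely the kind of all-or-nothing selection that can leave a visible seam in the composite picture.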
In some cases, however, a problem arises in that a discontinuity which does not inherently exist emerges at the boundary between the D1 and D2 regions of the composite picture. This problem happens when the ratio between the sensitivities of D1 and D2 does not exactly match the exposure ratio T1/T2, because of errors in the exposure and imperfect linearity of the image sensing element itself. The cause of this problem is that the choice between D1 and D2 is made abruptly, depending solely on whether D2 has reached the saturation level Dsat.
In order to solve such a problem, Japanese Laid-Open Patent Application No. Sho 62-108678 suggests multiplying D1 and D2 by respective weight coefficients f(D1) and g(D2), as shown in FIGS. 20(a) and 20(b), and adding both products. Accordingly, as shown in FIG. 20(c), the ratios in which D1 and D2 are used change gradually near the critical point, so that the selection is not abrupt.
Namely, a gray value D0 of a composite picture with an expanded dynamic range is calculated by the following equation:

D0 = (D1 × T2/T1) × f(D1) + D2 × g(D2)   (1)
As a result, a picture with the expanded dynamic range and without discontinuity can be obtained by changing the coefficients f(D1) and g(D2) gradually.
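For illustration, Eq. 1 might be sketched as follows. The particular linear ramp shapes of f and g are assumptions standing in for FIGS. 20(a) and 20(b), and, for simplicity, this sketch drives both weights from D2, whereas the reference defines f as a function of D1:

```python
import numpy as np

def compose_weighted(d1, d2, t1, t2, d_sat, width=0.2):
    """Gradual blend of two exposures in the manner of Eq. 1.

    g falls linearly from 1 to 0 as d2 approaches the saturation
    level d_sat, over a band of (width * d_sat) gray levels; f is
    its complement. These ramp shapes are illustrative assumptions."""
    d1 = np.asarray(d1, dtype=float)
    d2 = np.asarray(d2, dtype=float)
    g = np.clip((d_sat - d2) / (width * d_sat), 0.0, 1.0)
    f = 1.0 - g
    return (d1 * (t2 / t1)) * f + d2 * g
```

Well below saturation the composite equals D2; at saturation it equals D1 × T2/T1; in between, the two contributions cross over smoothly, avoiding the abrupt switch of the earlier method.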
The picture with the expanded dynamic range calculated by Eq. 1, however, still has problems caused by the following three factors: changes in the characteristics of the camera under the influence of temperature, illumination of the object that fluctuates in intensity over time, and movement of the object.
The first problem is that temperature changes the characteristics of the camera, affecting the relation between the intensity of incident light and the output signal of the camera. The solid lines in FIG. 19(a) show this relation without any influence of temperature. Under the influence of temperature, however, the relation deviates as shown by the solid lines in FIGS. 19(b) and 19(c), which depart from the broken lines drawn ignoring that influence. The deviation also appears numerically: the gamma value of the camera is 1.0 in FIG. 19(a), whereas it is 1.05 in FIG. 19(b) and 0.95 in FIG. 19(c). In the case of FIG. 19(a), when a picture with an expanded dynamic range is formed according to the above-mentioned method, the relation between the composite gray value D0 and the intensity of incident light is as shown in FIG. 21(f). On the other hand, in the cases of FIGS. 19(b) and 19(c), where the characteristics of the camera deviate from the standard characteristic, discontinuities emerge on the curves, as shown in FIGS. 21(g) and 21(h), which show the relations between the composite gray value and the intensity of incident light.
Such a change in the characteristics of the camera causes a difference between the sensitivity ratio of D1 and D2 and the ratio of the exposures T1 and T2 at a given picture element in the unsaturated region. As a result, the relation between the weight coefficients f and g and the intensity of incident light is not linear, as shown by the broken line in FIG. 20(c), but fluctuates, as shown by the broken lines in FIGS. 20(d) and 20(e).
Second, a problem similar to the first arises when the pictures are taken in sequence with different exposures while the intensity of illumination on the object fluctuates over time. A change in the intensity of incident light causes an apparent difference between the sensitivity ratio of D1 and D2 and the ratio of the exposures T1 and T2 at a given picture element in the unsaturated region. The curves showing the relation between the composite gray value D0 and the intensity of incident light fluctuate as shown in FIGS. 21(g) and 21(h), resulting in a composite picture with discontinuities.
Third, a problem similar to the first arises when pictures of a moving object are taken in sequence with different exposures. Suppose that movement of the object causes a position shift on the image sensing element between the first and second pictures, as shown in FIG. 23(a). Such a position shift causes an apparent difference between the sensitivity ratio of D1 and D2 and the ratio of the exposures T1 and T2 at a given picture element in the unsaturated region, with consequences similar to those of the first and second cases. FIG. 23(b) shows the composite picture with the expanded dynamic range in the case of FIG. 23(a), in which spikes P1 and P2 that do not inherently exist emerge.
In view of these problems, it is an object of this invention to provide an image sensing device that forms a picture with an expanded dynamic range despite changes in the characteristics of the camera, illumination of the object that fluctuates in intensity over time, and movement of the object.