The present invention relates to an image sensing apparatus comprising an electronic anti-vibration system which performs vibration correction by electrically extracting a sensed image based on vibration data of the image sensing apparatus main body. Particularly, the present invention relates to an image sensing method and apparatus capable of sensing a moving image and a still image, and a storage medium storing control programs for controlling the image sensing apparatus.
Conventional image sensing apparatuses such as a video camera or the like have achieved automatic functions and multi-functions in various aspects, e.g., automatic exposure (AE), automatic focus (AF) and so forth. By virtue of this, high quality image sensing is possible.
Furthermore, as the sizes of image sensing apparatuses become smaller and magnifications of optical systems become higher, vibration of the image sensing apparatus causes deteriorated sensed image quality. To compensate for such influence of vibration of the image sensing apparatus, various image sensing apparatuses having a function for correcting vibration have been proposed.
FIG. 1 is a block diagram showing a construction of a conventional image sensing apparatus having a vibration correction function.
The image sensing apparatus having the vibration correction function shown in FIG. 1 comprises an image sensing unit 401, a vibration correction section 402, a signal processor 403, a recording section 404, a vibration detector 405, a DC cut filter 406, an amplifier 407, a correction variable calculator 408, a read-out controller 409, and a timing generator 410.
The image sensing unit 401 comprises a lens 401a, and an image sensing device 401b such as a CCD or the like.
The vibration correction section 402 is provided to correct image signals, obtained by the image sensing device 401b, in order to reduce vibration components of a moving image influenced by unsteady hands or the like. The vibration correction section 402 comprises a first delay section 402a serving as delay means for one pixel, a first adder 402b, a second delay section 402c serving as delay means for one image line, and a second adder 402d. 
The signal processor 403 converts electric signals, whose vibration components have been reduced by the vibration correction section 402, to standard video signals such as NTSC signals or the like. The recording section 404, comprising a video tape recorder or the like, records the standard video signals converted by the signal processor 403 as image signals.
The vibration detector 405 comprises an angular velocity sensor such as a vibration gyro sensor or the like and is installed in the image sensing apparatus main body.
The DC cut filter 406 cuts off the DC components of the angular velocity signal, outputted by the vibration detector 405, and passes AC components only, i.e., vibration components. The DC cut filter 406 may employ a high-pass filter (hereinafter referred to as HPF) for cutting off signals of a predetermined band.
The amplifier 407 amplifies the angular velocity signal outputted by the DC cut filter 406 to an appropriate level.
The correction variable calculator 408 comprises, e.g., a microcomputer or the like, which includes an A/D converter 408a, high-pass filter (HPF) 408b, integrator 408c, and pan/tilt determination section 408d. 
The A/D converter 408a converts the angular velocity signal outputted by the amplifier 407 into a digital signal.
The high-pass filter 408b, having a variable cut-off frequency, cuts off low frequency components of the digital signal outputted by the A/D converter 408a. 
The integrator 408c outputs an angle displacement signal by integrating the signals (angular velocity signal) outputted by the high-pass filter 408b, and has the function to vary characteristics in an arbitrary frequency band.
The pan/tilt determination section 408d determines panning or tilting based on the angular velocity signal outputted by the A/D converter 408a and on the integrated signal outputted by the integrator 408c, i.e., the angle displacement signal obtained by performing integration processing on the angular velocity signal. The pan/tilt determination section 408d performs panning control, which will be described later, in accordance with the levels of the angular velocity signal and the angle displacement signal.
The angle displacement signal obtained by the correction variable calculator 408 serves as a correction target value in the control described later.
The read-out controller 409 shifts a read-out start position of the image sensing device 401b in accordance with the vibration correction target value signal, and controls operation timing of the first delay 402a and the second delay 402c as well as an addition ratio of the first adder 402b and the second adder 402d. 
The timing generator 410 generates driving pulses for the image sensing device 401b, the first delay 402a, and the second delay 402c, based on the control data of the read-out controller 409. Driving pulses are generated for the image sensing device 401b according to the storage and read operation of the image sensing device 401b. For the first delay 402a and the second delay 402c, driving pulses which serve as a reference for delaying operation, and whose phases are coherent with the driving pulses of the image sensing device 401b, are generated.
The vibration correction system comprises the vibration detector 405, DC cut filter 406, amplifier 407, correction variable calculator 408, read-out controller 409, and timing generator 410.
Next, operation of the pan/tilt determination section 408d of the correction variable calculator 408 will be described in detail.
The pan/tilt determination section 408d monitors the angular velocity signal from the A/D converter 408a and the angle displacement signal from the integrator 408c. If the angle displacement signal, which is obtained by integrating the angular velocity signal, is larger than a predetermined threshold value, the section 408d determines that a panning/tilting operation is being performed, regardless of whether the angular velocity is larger or smaller than its predetermined threshold value. In such a case, the cut-off frequency of the high-pass filter 408b is shifted higher to change the characteristics so that the vibration correction system does not respond to low frequencies. Further, in order to gradually shift a correction position of the image correction means to the center of the correction range, control is performed (hereinafter referred to as panning control) such that a time constant serving as an integration characteristic of the integrator 408c is reduced, so that the integral value of the integrator 408c converges to a reference value (the value obtained if no vibration is detected).
Note that during the above operation, the angular velocity signals and angle displacement signals continue to be detected. When the panning or tilting is completed, the cut-off frequency is lowered to expand the vibration correction range, and the panning control ends.
The panning determination operation will be described with reference to the flowchart in FIG. 2.
First, in step S501, the A/D converter 408a converts the angular velocity signal amplified by the amplifier 407 into a digital value that can be processed by the correction variable calculator 408. Next, in step S502, the high-pass filter 408b operates using the cut-off frequency value prepared in the previous control cycle. Then, in step S503, the integrator 408c executes an integration calculation using the time constant value prepared in the previous control cycle. In step S504, the integration result obtained in step S503, i.e., the angle displacement signal, is converted from a digital value to an analog value and outputted.
In step S505, it is determined whether or not the angular velocity signal is equal to or larger than a predetermined threshold value. If the angular velocity signal is equal to or larger than the predetermined threshold value, determination is made that a panning/tilting operation is being performed. Then, in step S506, the cut-off frequency value fc used for the operation of the high-pass filter 408b is increased from the current value so that low-frequency components of the signal are attenuated. Next, in step S507, the time constant value used in the integration is reduced from the current value by a predetermined value so that the angle displacement output approaches the reference value. Then, the control operation ends.
Meanwhile, if the angular velocity signal is not equal to or larger than the predetermined threshold value in step S505, it is determined in step S508 whether or not the integrated value is equal to or larger than a predetermined threshold value. If the integrated value is equal to or larger than the predetermined threshold value, determination is made that panning or tilting operation is being performed, and the control proceeds to step S506. Meanwhile, if the integrated value is not equal to or larger than the predetermined threshold value in step S508, determination is made that the camera is in a normal control state or panning/tilting operation has been completed. Then, the control proceeds to step S509.
In step S509, the cut-off frequency value used in the operation of the high-pass filter 408b is reduced from the current value by a predetermined value so that lower-frequency components pass through and the vibration correction range is expanded. Next, in step S510, the time constant value used in the integration is increased from the current value by a predetermined value so that the integration becomes more effective. Then, the control operation ends.
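The determination sequence of steps S501 to S510 can be sketched as follows. This is an illustrative model only: the function and state names, the thresholds, the step sizes, and the leaky-integrator form are all assumptions of this sketch, not part of the conventional apparatus.

```python
# Sketch of one control cycle of the pan/tilt determination (steps S501-S510).
# All names and constants below are illustrative assumptions.

VELOCITY_THRESHOLD = 0.5      # assumed threshold for the angular velocity signal (S505)
DISPLACEMENT_THRESHOLD = 0.8  # assumed threshold for the angle displacement signal (S508)
FC_STEP = 0.1                 # assumed step for shifting the HPF cut-off frequency
TAU_STEP = 0.1                # assumed step for adjusting the integrator time constant

def control_cycle(angular_velocity, state):
    """state holds 'fc' (HPF cut-off), 'tau' (integrator time constant),
    and 'integral' (the running angle displacement)."""
    # S502: high-pass filtering; modelled as a pass-through here for brevity
    # (state['fc'] would parameterize a real filter).
    filtered = angular_velocity
    # S503: leaky integration with the current time constant; a smaller tau
    # pulls the integral faster toward the reference value (zero).
    state['integral'] = state['integral'] * state['tau'] + filtered
    displacement = state['integral']          # S504: the correction target value
    # S505/S508: panning/tilting is assumed when either signal exceeds its threshold.
    if (abs(angular_velocity) >= VELOCITY_THRESHOLD
            or abs(displacement) >= DISPLACEMENT_THRESHOLD):
        state['fc'] += FC_STEP                            # S506: raise cut-off
        state['tau'] = max(0.0, state['tau'] - TAU_STEP)  # S507: reduce time constant
    else:
        state['fc'] = max(0.0, state['fc'] - FC_STEP)     # S509: expand correction range
        state['tau'] = min(0.99, state['tau'] + TAU_STEP) # S510: restore integration
    return displacement
```

Running one cycle with a large angular velocity raises the cut-off and reduces the time constant (the panning branch); a small input relaxes both back toward the normal control state.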
By the above-described control, the integration value is prevented from overflowing beyond the correction target value. As a result, the correction target value is kept stable, and stable anti-vibration control is achieved.
Next, the correction means in the above-described conventional example will be briefly described with reference to FIG. 3.
Referring to FIG. 3, reference numeral 601 denotes an entire image sensing area of the image sensing device 401b; 602, an extraction frame set in the entire image sensing area 601, where an image signal is actually converted to standard video signals and outputted; and 603, an object sensed by the image sensing device.
The standard video signal at this stage is seen as the image indicated by reference numeral 604. More specifically, reference numeral 604 denotes an image area on a monitor where the video signal is reproduced, and 603′ denotes the object reproduced in the image area 604 of the monitor. By extracting an image signal from a sensed image as will be described later, the periphery of the entire image sensing area 601 is removed, and the remaining portion, that is, the extracted image, is outputted as standard video signals. By this, the image area 604 is reproduced on the monitor.
FIG. 3 shows a case where an operator, who performs image sensing of the object 603, shifts the image sensing apparatus in a lower left direction indicated by arrows 605, 605′, and 605″.
In this state, if an image is extracted by using an extraction frame 602′ which corresponds to the extraction frame 602, the obtained video signal would represent the object 603 shifted by the amount of the vector indicated by the arrow 606. The image 604 can be obtained if the extraction position is corrected and moved from the extraction frame 602′ to the extraction frame 602″, using an image displacement 607 obtained based on the vibration amount of the image sensing apparatus, i.e., the vibration correction target value. By employing this principle, image vibration correction is realized.
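The correction of the extraction position can be illustrated by the following sketch, which shifts the origin of the extraction frame by the measured image displacement while clamping it to the sensor area. The function name, the sign convention of the displacement, and the coordinate layout are assumptions of this sketch.

```python
def shift_extraction_frame(origin, displacement, sensor_size, frame_size):
    """Shift the extraction frame origin by the image displacement
    (the vibration correction target value), clamped so that the frame
    never leaves the image sensing area. Illustrative only; the sign
    convention of 'displacement' is assumed."""
    x = min(max(origin[0] + displacement[0], 0), sensor_size[0] - frame_size[0])
    y = min(max(origin[1] + displacement[1], 0), sensor_size[1] - frame_size[1])
    return (x, y)
```

With a 640 by 480 sensor and a 560 by 420 frame, a displacement of (5, −3) moves the frame origin accordingly, while an excessive displacement is clamped at the edge of the correction range.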
Next, extraction of an image sensing area for correction will be described with reference to FIG. 4.
In FIG. 4, reference numeral 701 denotes the entire image sensing device; and 702, pixels constructing the entire image sensing device 701, each serving as a photoelectric converter. Based on electric driving pulses generated by a timing generator (not shown), charge storage and read-out controls are performed pixel by pixel. Reference numerals 703 and 704 are extraction frames similar to the extraction frame 602 shown in FIG. 3. This will be explained for a case where video signals are extracted with, e.g., the extraction frame 703.
First, starting from the pixel “S”, photoelectrically converted charges are read in the sequence of the arrow 706. The reading operation starts in concurrence with the synchronization period of output video signals. During the synchronization period, reading is performed at a faster transfer rate than the normal reading rate until the reading-out reaches the pixel preceding the pixel “A” by one pixel.
After the synchronization period, charges for a real image period are read starting from the pixel “A” to the pixel “F” at the normal reading rate, as a line of image data in video signals.
During the horizontal synchronization period, charges of pixels are read from the pixel “F” to the pixel before the pixel “G” at a faster transfer rate than the normal reading rate, preparing for reading the next image period. Starting from the pixel “G”, reading is performed in a similar manner to the reading from the pixel “A” to the pixel “F”.
By controlling the reading operation as described above, for instance, the central part of the image sensing device 401b can be selectively extracted from the entire image sensing area 601 of the image sensing device 401b, and video signals can be obtained.
Described next with reference to FIG. 4 is moving the extraction position, necessary in a case where the image sensing surface is shifted due to vibration on the image sensing apparatus as described in FIG. 3.
In a case where the movement of the object (vibration of the image sensing apparatus), which is equivalent to the arrow 705, is detected on the image sensing device 401b, the extraction frame 703 is changed to the extraction frame 704 so that an image without any object movement is obtained.
In order to change the extraction position, the aforementioned read-out start position is changed from the pixel “A” to the pixel “B”. By this change, it is possible to selectively extract a partial image from the entire image sensing area 601 of the image sensing device 401b, similar to the case where reading starts from the pixel “A”.
More specifically, photoelectrically converted charges are read from the pixel “S” in the sequence of the arrow 706, similarly to the case of reading the extraction frame 703. The reading operation starts in concurrence with the synchronization period of output video signals. During the synchronization period, reading is performed at a faster transfer rate than the normal reading rate until the reading reaches the pixel one pixel prior to the pixel “B”. Then, in the real image period, reading starts from the pixel “B”.
As described above, a partial image sensing area, which is the periphery of the image sensing device 401b, is read in advance for the amount corresponding to vibration correction data during the synchronization signal period which does not appear in the real image period, and the part of the image sensing device 401b is selectively read out based on the vibration data. By this, it is possible to obtain video signals from which displacement caused by vibration on the image sensing apparatus is removed.
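The extraction read-out described above can be modelled as selecting a sub-window of the sensor array: the pixels before the start position on each line correspond to the region discarded during fast transfer, and only the window is kept as the real image period. This is a software sketch of the principle, not of the CCD driving itself; the function name and 2-D-list representation are assumptions.

```python
def read_extraction_window(sensor, start_row, start_col, rows, cols):
    """Model of the extraction read-out: 'sensor' is a 2-D list of pixel
    values. Changing start_row/start_col corresponds to changing the
    read-out start position (e.g., from pixel "A" to pixel "B")."""
    frame = []
    for r in range(start_row, start_row + rows):
        # Pixels before start_col model the fast-transfer (discarded) region;
        # the slice models the normal-rate read of the real image period.
        line = sensor[r][start_col:start_col + cols]
        frame.append(line)
    return frame
```

Shifting the start position by the vibration correction data then yields a frame in which the displacement caused by vibration is removed.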
Next, pixel shifting operation by the first delay 402a, second delay 402c, first adder 402b, and second adder 402d will be described in detail with reference to FIG. 5.
FIG. 5 is a block diagram showing a construction of an image sensing apparatus for performing finer image correction by extracting an image from the image sensing device 401b. Components shown in FIG. 5 which are identical to those of the above-described conventional example in FIG. 1 have the same reference numerals.
Referring to FIG. 5, the first adder 402b comprises a first multiplier 801, a second multiplier 802, an adder 803, and addition controller 804. The second adder 402d comprises a first multiplier 805, a second multiplier 806, an adder 807, and addition controller 808.
The image extraction described with reference to FIG. 4 can only be performed in units of one pixel 702 of the image sensing device 401b; the correction cannot be made for a shift less than one pixel pitch. Therefore, the first delay 402a, second delay 402c, first adder 402b, and second adder 402d perform fine pixel shifting, which accompanies a correction calculation for an amount less than one pixel.
Referring to FIG. 5, reference numeral 809 denotes an input terminal for inputting a sensed image signal obtained by the image sensing device 401b. While the sensed image signal is inputted to the second multiplier 802 of the first adder 402b, the signal is also inputted to the first multiplier 801 through the first delay 402a, which is constructed by a CCD or the like. After the first multiplier 801 and the second multiplier 802 multiply the respective image signals by predetermined coefficients, addition is executed by the adder 803. As a result, horizontal pixel-shifting correction is performed on the sensed image signal.
The sensed image signal, on which horizontal pixel-shifting correction has been performed, is inputted to the second multiplier 806 of the second adder 402d, and to the first multiplier 805 through the second delay 402c, which is constructed by a CCD or the like. The first multiplier 805 and the second multiplier 806 multiply the signals by predetermined multipliers. The adder 807 adds the outputs of the multipliers 805 and 806. Thus, the sensed image signal on which horizontal and vertical pixel-shifting correction has been performed is obtained. The image signals are encoded to video signals by the signal processor 403 in the next step.
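The cascade of the one-pixel delay with the first adder and the one-line delay with the second adder can be sketched as a weighted addition of horizontally and then vertically neighboring samples. The function name and the list-of-lists frame representation are assumptions of this sketch; kh and kv play the role of the addition ratios set by the read-out controller.

```python
def subpixel_shift(frame, kh, kv):
    """Weighted addition of each pixel with its one-pixel-delayed neighbour
    (horizontal shifting), then of each line with the one-line-delayed line
    (vertical shifting). 'frame' is a 2-D list of floats; kh, kv in [0, 1]."""
    # Horizontal pixel shifting: out[c] = kh*frame[c] + (1-kh)*frame[c-1]
    h = [[kh * row[c] + (1 - kh) * row[c - 1] for c in range(1, len(row))]
         for row in frame]
    # Vertical pixel shifting applied to the horizontally corrected image
    v = [[kv * h[r][c] + (1 - kv) * h[r - 1][c] for c in range(len(h[r]))]
         for r in range(1, len(h))]
    return v
```

For instance, with kh = kv = ½, a 2 by 2 block of values 0, 2, 4, 6 collapses to the single value at its imaginary center.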
The read-out controller 409 controls the timing of driving pulses generated by the timing generator 410 for performing image extraction control of the image sensing device 401b based on the correction target value signal which corresponds to vibration of the image sensing apparatus, which is generated by the correction variable calculator 408 shown in FIG. 1. Meanwhile, the read-out controller 409 controls the addition controller 804 of the first adder 402b and the addition controller 808 of the second adder 402d so as to perform addition operation at an appropriate addition ratio.
This addition operation will now be described in detail with reference to FIGS. 6A, 6B, 7A and 7B.
FIGS. 6A and 6B are explanatory views showing the process of the first delay 402a and the first adder 402b in units of the pixels 702 of the image sensing device 401b. The pixels 702 of the image sensing device 401b are arranged in a regular order in the vertical and horizontal directions on the surface of the image sensing device 401b as shown in FIG. 4. FIGS. 6A and 6B only show the pixels arranged in the horizontal direction for explanatory purposes.
First, addition operation performed in pixel units shown in FIG. 6A is described.
A horizontal center position of the n-th pixel of the pixels 702 of the image sensing device 401b is referred to as a horizontal pixel center 901. Similarly, a horizontal center position of the (n+1)th pixel of the pixels 702 is referred to as a horizontal pixel center 902. FIG. 6A graphically shows a method of calculating an imaginary center of the n-th pixel and the (n+1)th pixel. The n-th pixel value multiplied by ½, denoted by numeral 903, and the (n+1)th pixel value multiplied by ½, denoted by 904, are added, thereby obtaining a pixel value at the imaginary center 905 of the n-th pixel and the (n+1)th pixel. Similarly, a value in which the (n+1)th pixel is multiplied by ½ is added to a value in which the (n+2)th pixel is multiplied by ½, thereby obtaining a pixel value at the imaginary center of the (n+1)th pixel and the (n+2)th pixel.
Next, another addition operation shown in FIG. 6B will be described. FIG. 6B adopts a different addition ratio from that of FIG. 6A, showing a case of executing an addition with a ratio of 7/10 to 3/10.
In FIG. 6B, similarly to FIG. 6A, the horizontal center position of the n-th pixel of the pixels 702 of the image sensing device 401b is defined as a horizontal pixel center 901, and the horizontal center position of the (n+1)th pixel of the pixels 702 of the image sensing device 401b is defined as a horizontal pixel center 902. In order to obtain an image having 7/10 of the n-th pixel value and 3/10 of the (n+1)th pixel value, a pixel value 903′ in which the n-th pixel is multiplied by 7/10 is added to a value 904′ in which the (n+1)th pixel is multiplied by 3/10, thereby obtaining an image 905′ having 7/10 of the n-th pixel and 3/10 of the (n+1)th pixel. Similarly, by adding a value in which the (n+1)th pixel is multiplied by 7/10 to a value in which the (n+2)th pixel is multiplied by 3/10, it is possible to obtain a pixel having 7/10 of the (n+1)th pixel and 3/10 of the (n+2)th pixel.
As described above, by adjusting the addition ratio of adding the n-th pixel and the (n+1)th pixel, it is possible to obtain pixel data for an arbitrary position of a pixel in the horizontal direction. Note that, assuming that the n-th pixel value is multiplied by k, the addition ratio is obtained by multiplying the (n+1)th pixel value by (1−k) (where 0≦k≦1). Where the multiplication ratio is set to 1:0 or 0:1, addition is not performed, and the normal read-out operation is performed.
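The weighted addition above reduces to the following one-line formula; the function name is an assumption of this sketch.

```python
def mix_pixels(p_n, p_n1, k):
    """Weighted addition of the n-th and (n+1)th pixel values:
    out = k * p_n + (1 - k) * p_n1, with 0 <= k <= 1.
    k = 1 or k = 0 reduces to the normal read-out (no addition)."""
    assert 0.0 <= k <= 1.0
    return k * p_n + (1 - k) * p_n1
```

With k = ½ this gives the imaginary center of FIG. 6A, and with k = 7/10 the 7/10 : 3/10 mixture of FIG. 6B.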
FIGS. 7A and 7B are explanatory views showing the process of the second delay 402c and the second adder 402d in the horizontal line units of the image sensing device 401b. 
First, the addition operation performed in horizontal line units shown in FIG. 7A will be described. A horizontal line is constructed by a horizontal array of pixels of the image sensing device 401b. The vertical center position of the n-th line is referred to as a vertical pixel center 1001. Similarly, the vertical center position of the (n+1)th line is referred to as a vertical pixel center 1002.
FIG. 7A graphically shows a method of calculating a pixel value at the center positioned between the n-th line and the (n+1)th line. A value 1003 in which the n-th line is multiplied by ½ is added to a value 1004 in which the (n+1)th line is multiplied by ½, thereby obtaining a line 1005. Similarly, a value in which the (n+1)th line is multiplied by ½ is added to a value in which the (n+2)th line is multiplied by ½, thereby obtaining the center image of the (n+1)th line and (n+2)th line.
Next, addition operation performed in horizontal line units shown in FIG. 7B will be described. FIG. 7B adopts a different addition ratio from that of FIG. 7A, showing a case of executing addition with a ratio of 7/10 to 3/10.
In FIG. 7B, similar to FIG. 7A, the vertical center position of the n-th line of the pixels 702 of the image sensing device 401b is referred to as a vertical pixel center 1001, and the vertical center position of the (n+1)th line is referred to as the vertical pixel center 1002. In order to obtain an image having 7/10 of the n-th line and 3/10 of the (n+1)th line, a value 1003′ in which the n-th line is multiplied by 7/10 is added to a value 1004′ in which the (n+1)th line is multiplied by 3/10, thereby obtaining a line image 1005′ having 7/10 of the n-th line and 3/10 of the (n+1)th line. Similarly, by adding a value in which the (n+1)th line is multiplied by 7/10 to a value in which the (n+2)th line is multiplied by 3/10, it is possible to obtain an image having 7/10 of the (n+1)th line and 3/10 of the (n+2)th line.
As described above, by adjusting the addition ratio of adding the n-th line and the (n+1)th line, it is possible to obtain line data for an arbitrary position in one line in the vertical direction.
Note that, assuming that the n-th line is multiplied by k, the addition ratio is obtained by multiplying the (n+1)th line by (1−k) (where 0≦k≦1). If the multiplication ratio is set to 1:0 or 0:1, addition is not performed, and the normal read-out operation is performed.
As set forth above, correction operation in pixel units is achieved by reading out data by the image sensing device 401b. Correction operation for an amount less than one pixel unit is achieved by performing pixel shifting. By this, excellent anti-vibration can be attained.
However, an image sensing apparatus capable of sensing both a moving image and a still image has recently been proposed. Such an image sensing apparatus also employs an anti-vibration system similar to the above. However, the anti-vibration system employed by the foregoing conventional image sensing apparatus realizes the anti-vibration process by the pixel shifting operation, in which signals stored in neighboring pixels are added up, resulting in deteriorated resolution of images. Particularly in the case of sensing a still image, where correction between frames by image extraction is not important, the problem of deteriorated image resolution (hereinafter referred to as the first problem) arises.
Next, a second problem of the conventional apparatus will be described.
As a conventional method of anti-vibration, the method disclosed in Japanese Patent Application Laid-Open No. 60-143330 is known.
This conventional art includes optical and electronic vibration correction methods. According to the optical correction method, vibration of the image sensing apparatus main body is detected by a rotational gyro sensor, and based on the detected result, the optical system including the lens and the image sensing apparatus main body is controlled. According to the electronic correction method, signal transfer of an image sensing device is divided into a high-velocity transfer mode and a normal transfer mode, and the number of pixels transferred in high-velocity transfer is controlled. The above-described conventional example employs the electronic correction method, which particularly provides the advantage of reducing the apparatus size.
In the electronic anti-vibration method, the number of pixels transferred is controlled. In a case of employing an image sensing device which reads vertically neighboring pixels simultaneously, the pixel area where signals are normally transferred is shifted at a two-pixel pitch. This limits the resolving power of the anti-vibration.
As a method of improving the resolving power in anti-vibration, the method disclosed in Japanese Patent Application Laid-Open No. 3-132173 is known. According to this method, the vertically neighboring two pixels which are read simultaneously by an image sensing device are shifted at a one-pixel pitch. Since pixels are shifted at a one-pixel pitch within the image area where signals are normally transferred, smooth pixel movement is achieved.
For instance, in a case of reading pixels, the conventional art reads pixel data at a two-pixel pitch as shown in FIG. 8A. When shifting read-out data, the conventional apparatus, instead of shifting pixels at a two-pixel pitch as shown in FIG. 8B, shifts image data at a one-pixel pitch, as shown in FIG. 8C. By this, the pixel area normally transferred can be shifted at a one-pixel pitch. In this regard, since the pixel combinations are different, the color process has to be changed later so as to perform color processing according to the combination of pixels.
Note that in FIGS. 8A, 8B and 8C, Y denotes a yellow color filter; C, a cyan color filter; M, a magenta color filter; and G, a green color filter. Meanwhile, there is a pixel shifting technique available today in which image signals of the vertically neighboring two pixels are added in a predetermined ratio, and an image is shifted by changing the ratio, thereby improving the resolving power of the anti-vibration. Herein, this pixel shifting will be described with reference to FIG. 9.
Referring to FIG. 9, first, an image signal for one horizontal scan period (H) is stored in a 1-H delay memory 1701. A first multiplier 1702 multiplies the currently inputted image signal by KV. A second multiplier 1703 multiplies the output of the 1-H delay memory 1701 by (1−KV). These multiplied values are added by an adder 1704. By this, when KV is 1, the currently inputted image signal is outputted. When KV is ½, an intermediate image signal between the currently inputted image signal and the signal in the 1-H delay memory 1701 is outputted. When KV is 0, the image signal stored in the 1-H delay memory 1701, i.e., the signal 1 H prior to the current input, is outputted. By virtue of this, it is possible to shift the positions of image signals. If the addition ratio is adjusted in small increments, smaller vibration can be accommodated.
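The operation of the multipliers 1702 and 1703 and the adder 1704 in FIG. 9 can be modelled per sample as follows; the function name and list representation of a line are assumptions of this sketch.

```python
def mix_lines(current_line, delayed_line, kv):
    """Per-sample model of the adder 1704 in FIG. 9:
    kv = 1 outputs the current line, kv = 0 outputs the line stored in the
    1-H delay memory, and kv = 1/2 outputs the intermediate line."""
    return [kv * c + (1 - kv) * d for c, d in zip(current_line, delayed_line)]
```

Sweeping kv in small increments between 0 and 1 shifts the image position in correspondingly small steps, which is how smaller vibration is accommodated.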
However, in the above-described conventional example in FIG. 9, if the anti-vibration operation is performed by shifting the pixel area where signals are normally transferred at a one-pixel pitch, the resolving power is not high enough to perform anti-vibration. As a result, very small vibration remains or, on the contrary, overcorrection occurs. Moreover, performing anti-vibration by pixel shifting causes the problems of “resolution unevenness” and flickering on a screen. Herein, the “resolution unevenness” will be described.
Since pixel shifting is performed by adding image signals of the vertically neighboring two pixels as described above, the resolution is deteriorated. Shown in FIG. 10 is a graph showing deterioration of resolution. The ordinate in FIG. 10 indicates an image position, and the abscissa indicates resolution. Reference numerals 1801 to 1805 denote image signals respectively.
As can be seen from FIG. 10, the highest resolution is achieved when pixel shifting is not performed. The resolution falls to its lowest level when pixels are shifted to an intermediate position between two pixels. Therefore, if pixel shifting to an arbitrary position is performed in anti-vibration control, the resolution changes moment by moment, causing flickering on a screen. This is called “resolution unevenness.” The occurrence of resolution unevenness is the second problem to be solved in the conventional art.
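The resolution unevenness can be illustrated quantitatively, on the assumption (our own, not from the source) that the weighted addition acts as a two-tap averaging filter with coefficients k and (1−k). At the Nyquist frequency, where adjacent samples alternate in sign, the gain of such a filter is |k − (1−k)| = |2k − 1|: it is 1 when no shifting is performed (k = 0 or 1) and falls to 0 at the intermediate position (k = ½), matching the resolution behavior shown in FIG. 10.

```python
def nyquist_gain(k):
    """Gain of the two-tap averaging filter [k, 1-k] at the Nyquist
    frequency. Adjacent samples alternate sign there, so the response
    magnitude is |k - (1 - k)| = |2k - 1|."""
    return abs(2.0 * k - 1.0)

# Resolution is highest with no shifting (k = 0 or 1) and lowest midway (k = 1/2):
gains = [nyquist_gain(k) for k in (0.0, 0.25, 0.5, 0.75, 1.0)]
```

As the shift position sweeps during anti-vibration control, this gain rises and falls repeatedly, which is the mechanism behind the flickering described above.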