Color television camera systems have evolved over the years, but a recent challenge has been to develop a practical high definition television camera with improved performance that overcomes the limitations of existing solid state image sensor (e.g. CCD) technology.
It is known that an electronic video signal (television signal) can be encoded at reduced bandwidth by lowering the frame refresh rate of the high spatial frequency components while maintaining the frame refresh rate of at least a portion of the low spatial frequency components at the standard rate. If done in a suitable manner, this does not cause substantial degradation in the ultimately displayed image, since human vision cannot perceive changes in high spatial resolution information at as fast a rate as it can perceive changes in low spatial resolution information. Accordingly, as has been previously set forth, an electronic video encoding and decoding system can be devised which takes advantage of this and other characteristics of human vision by encoding the higher spatial resolution video components at a temporal information rate that approximately corresponds to the highest rate actually perceived by human vision for such components, thereby eliminating the need to encode these components at a higher rate, which inherently wastes bandwidth.
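The principle above can be illustrated with a minimal sketch (not the patented encoder itself): each frame is split into a low spatial frequency band and a detail band, and only the low band is refreshed every frame, while the detail band is refreshed at a reduced rate. The 2x2 block-average band split and the function names are assumptions for illustration only.

```python
import numpy as np

def split_bands(frame):
    """Split a frame into a low-spatial-frequency band and a detail
    (high-spatial-frequency) residual, using simple 2x2 block averaging.
    Illustrative only; a real encoder would use proper spatial filters."""
    h, w = frame.shape
    low = frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    # Upsample the low band and take the residual as the detail band.
    low_up = np.repeat(np.repeat(low, 2, axis=0), 2, axis=1)
    detail = frame - low_up
    return low, detail

def encode(frames, detail_rate_divisor=2):
    """Transmit the low band every frame, but refresh the detail band
    only every `detail_rate_divisor` frames, reducing bandwidth."""
    encoded = []
    for i, frame in enumerate(frames):
        low, detail = split_bands(frame)
        if i % detail_rate_divisor == 0:
            encoded.append((low, detail))  # detail refreshed at the slow rate
        else:
            encoded.append((low, None))    # detail omitted: bandwidth saved
    return encoded
```

A corresponding decoder would simply reuse the most recently received detail band for the frames in which it was omitted, exploiting the eye's slower temporal response to fine detail.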
The foregoing principle was used to advantage in my U.S. Pat. No. 4,652,909, which discloses a video camera system and method with a high definition capability. First and second video imaging devices are provided. Optical means are provided for directing light from the scene to both of the first and second video imaging devices, such that they receive substantially the same optical image. Means are provided for scanning the first video imaging device at a relatively fast scan rate, typically, but not necessarily, a conventional 30 frames per second. Means are also provided for scanning the second video imaging device at a relatively slow scan rate, preferably not greater than 15 frames per second. In this system, as indicated, the top octave (detail) information can be scanned at 15 frames per second and the other (lower spatial frequency) information scanned with a standard television camera at 30 interlaced frames per second.
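The two-sensor scheme of U.S. Pat. No. 4,652,909 can be sketched as follows, assuming the representative rates given above (30 frames per second for the fast-scanned sensor and 15 frames per second for the slow-scanned detail sensor). The frame objects and function name are hypothetical; the sketch shows only how each fast-scan frame would be paired with the most recent slow-scan detail frame.

```python
def merge_streams(fast_frames, slow_frames, fast_fps=30, slow_fps=15):
    """Pair each fast-scan (lower spatial frequency) frame with the most
    recent slow-scan (top-octave detail) frame. With 30 fps and 15 fps,
    each detail frame persists across two fast frames."""
    ratio = fast_fps // slow_fps
    merged = []
    for i, low in enumerate(fast_frames):
        detail = slow_frames[i // ratio]  # detail held until next slow scan
        merged.append((low, detail))
    return merged
```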
Another prior art approach of interest is the four-tube (or four-sensor) color camera. In this camera system, a fourth tube derives the luminance while the other three tubes derive the color information. All tubes are scanned, interlaced, at 30 frames per second.
Interlaced scanning was developed many years ago as a way of displaying images without flicker at a frame rate of 30 frames per second. The 1/60 second integration time on the face of the interlaced camera was short enough to prevent "motion blur" of moving objects. Interlace can be visualized as a form of subband coding compression. It updates low spatial frequency information at 60 fields per second but requires 1/30 second to produce detail. Because of the longer integration time of the human visual system for detail information, this system works reasonably well. However, it has several artifacts resulting from the "compression". One such artifact relates to the fact that the eye frequently scans vertically at the interlace "line crawl" velocity. During this time the field line structure is clearly visible and objects moving at that velocity are displayed with half as many scan lines. A more objectionable artifact is interline flicker. Detail information near the Nyquist limit of resolution produces a "moire" beat at low spatial frequencies that flickers at 30 Hz. The visual system has good temporal response for low spatial frequencies. For this reason, displays that have considerable information near the Nyquist limit of resolution (e.g. some computer displays) use progressive rather than interlaced scanning.
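The subband view of interlace described above can be made concrete with a small sketch: each progressive frame is divided into its odd-line and even-line fields, which arrive 1/60 second apart, while recovering full vertical detail requires weaving both fields together over 1/30 second. The function names are illustrative, not from the patent.

```python
import numpy as np

def to_fields(frame):
    """Split a progressive frame into its two interlace fields (odd and
    even line sets). Each field updates low vertical frequencies at the
    60 Hz field rate; full vertical detail needs both fields (1/30 s)."""
    return frame[0::2, :], frame[1::2, :]

def weave(field_a, field_b):
    """Recombine two fields into one full-vertical-detail frame."""
    h = field_a.shape[0] + field_b.shape[0]
    frame = np.empty((h, field_a.shape[1]), dtype=field_a.dtype)
    frame[0::2, :] = field_a
    frame[1::2, :] = field_b
    return frame
```

When the two fields come from different instants of a moving scene, the woven result is inconsistent line-to-line, which is the source of the interline flicker and line-crawl artifacts noted above.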
The interlace format has other recognized disadvantages. Interlace is a difficult signal for compression systems to process and is a nuisance for electronic video post-production. The computer industry, compression researchers, and post-production houses have long desired a video system with square pixels, progressively scanned in a 1920×1080 common image format. Displays have long been able to present this format; for example, active-matrix LCD displays are addressed progressively at 60 frames per second in order to obtain good motion rendition and high brightness. It is only the lack of a practical camera and recording device that has limited the transition to progressive scan in the common image format.
Image sensors have become available (e.g. the Eastman Kodak KAI-2090 and KAI-2091) that have 1920×1080 square pixels with a 16:9 aspect ratio, and that can be scanned (interlaced or progressive) at 30 frames per second with 1080 visible lines per frame, or scanned (interlaced or progressive) at 60 frames per second, two lines at a time, to give 540 visible lines per frame. The problem, however, is that these sensors cannot be scanned at 60 frames per second with the full 1080 lines per frame.
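The geometric effect of the 60 frames per second readout mode can be sketched as follows. The internals of the KAI-209x sensors are not modeled here; as a simplifying assumption, reading two lines at a time is represented by averaging each pair of adjacent lines, which halves the line count so the frame can be read out in half the time.

```python
import numpy as np

def bin_two_lines(sensor_frame):
    """Model the 60 fps readout mode: each pair of adjacent lines is
    combined (here, averaged) into one output line, so a 1080-line
    sensor yields 540 visible lines per frame. Illustrative only."""
    h, w = sensor_frame.shape
    return sensor_frame.reshape(h // 2, 2, w).mean(axis=1)
```

Applied to a 1080-line frame, this yields 540 lines, matching the 60 fps mode described above; no mode of the sensor itself delivers all 1080 lines at 60 frames per second.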
It is among the objects of the present invention to provide a method and apparatus for generating color video signals at increased line and/or frame rates, above the scanning capability of the sensors used, preferably in progressive scan format, and without introducing artifacts in images produced from the video signals. It is also among the objects of the present invention to provide an improved color video camera using a minimum number of video sensors.