1. Field of the Invention
The present invention relates to a digital image recording/reproducing apparatus. More particularly, the present invention relates to a digital image recording apparatus for recording an image taken via a solid state imaging device and to a digital image reproducing apparatus for reproducing a recorded image.
2. Description of the Related Art
In the technology of digital image recording/reproducing apparatus such as an electronic still camera, it is known in the art to separate still image data taken via a solid state imaging device into luminance and chrominance signals and then record the data on a recording medium. The recorded still image data may be reproduced by a reproducing circuit and displayed on a screen of a monitor or the like.
An example of such an electronic still camera is disclosed in Japanese Patent Publication No. HEI 2-2353 (1990). The electronic still camera of this technique includes: an image taking element for taking an image of an object and generating a color video signal corresponding to the object, the color video signal including a chrominance signal having a first resolution; a display element capable of displaying an image in response to a chrominance signal having a second resolution lower than the first resolution; and a video signal output control element having first and second output modes. In the first output mode, the signal representing the brightness included in the color video signal generated by the image taking element is outputted, at the first resolution, to an element other than the above-described display element. In the second output mode, the resolution of the signal representing the brightness is converted from the first resolution to the second resolution, and the resultant signal is outputted to the above-described display element.
As for a recording medium for use in an electronic still camera (hereafter referred to simply as a camera), a built-in memory such as an SRAM or a flash memory, or a solid-state memory card is now widely used. Various digital recording techniques for electronic still cameras have been proposed.
The advent of the digital recording technique has made electronic still cameras popular as an input device for inputting image data to a personal computer.
A solid state imaging device of the type widely used in a video camera of an NTSC monitor system is often employed in the electronic still camera. A great number of solid state imaging devices of this type are produced, and thus the employment of such devices leads to a reduction in cost.
In the standard VIDEO-1 regarding the digital still cameras prescribed by JEIDA (Japan Electronic Industry Development Association), the pixel number of one image is defined as 768 (horizontal)×480 (vertical) so that the pixel number may be compatible with that of the solid state imaging device of the above-described type.
In the solid state imaging device used in a video camera, while the number of pixels along a vertical direction is determined according to the standard such as the NTSC standard, the number of pixels along a horizontal direction may be set to a desired arbitrary value.
In many solid state imaging devices, to achieve a high horizontal resolution, the length WA of one pixel along the horizontal direction is set to a value smaller than the length HA along the vertical direction, as shown in FIG. 33A, such that:

HA:WA=1:(n/m), where n<m.
If the sampling number is determined according to the pixel configuration of a solid state imaging device defined, for example, by the standard VIDEO-1, the aspect ratio or the ratio of the vertical length to the horizontal length of one pixel corresponding to each unit of digital data has a value other than 1 (unity).
On the other hand, in a display device such as a monitor used in a personal computer system, the aspect ratio of each pixel is equal to unity as shown in FIG. 33B. That is:

HB:WB=1:1,

where HB is the vertical length and WB is the horizontal length of one pixel. In other words, each pixel has a square shape in this case.
When image data recorded in digital form having an aspect ratio different from unity such as that shown in FIG. 33A is input to a personal computer, if the image data is displayed directly on a monitor screen consisting of pixels having a square shape such as that shown in FIG. 33B, the ratio of the horizontal length to the vertical length of the resultant image will be greater than that of the original image taken by a camera.
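The distortion described above can be quantified with a short sketch. The figures below are illustrative only (they are not taken from the disclosure): a 768×480 image of a 4:3 scene implies a recorded pixel aspect ratio below unity, and displaying it with square pixels widens the image by the reciprocal of that ratio.

```python
# Illustrative sketch (not from the disclosure): the horizontal stretch
# that results when 768x480 data sampled with non-square pixels is
# displayed unconverted on a square-pixel monitor.

PIXELS_H, PIXELS_V = 768, 480        # VIDEO-1 pixel counts
SCENE_ASPECT = 4 / 3                 # aspect ratio of the original scene

# Aspect ratio (width/height) one recorded pixel must have so that
# 768x480 samples cover a 4:3 scene.
pixel_aspect = SCENE_ASPECT / (PIXELS_H / PIXELS_V)   # 5/6, i.e. < 1

# A square-pixel monitor displays every pixel with aspect 1.0, so the
# image appears widened by the reciprocal of pixel_aspect.
stretch = 1.0 / pixel_aspect                          # 1.2x wider

print(f"recorded pixel aspect: {pixel_aspect:.4f}")
print(f"horizontal stretch on a square-pixel display: {stretch:.2f}x")
```

With these assumed numbers, the displayed image comes out 20% wider than the original scene, which is the distortion the conversion techniques below are meant to remove.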
One known technique to solve such a problem is to display an image after converting the image data by a software program provided in the personal computer so that the pixel has a correct aspect ratio. However, this technique needs a long time to display a frame of the image, ranging from a few seconds to a few tens of seconds depending on the processing speed of the software which is, in turn, dependent on the processing ability of a CPU used in the personal computer.
In another known technique, a camera equipped with a solid state imaging device having a pixel aspect ratio equal to unity (hereafter such a pixel will also be referred to as a square pixel) is employed. In this technique, it is possible to directly display an image on a monitor screen without having to convert the pixel aspect ratio via a personal computer. In this case, the pixel numbers along the horizontal and vertical directions are selected as 640×480 elements according to the VGA standard widely employed in personal computers.
However, the solid state imaging device of this type is generally designed for use in conjunction with a monitor display of a personal computer, and no consideration is given in the design to use in conjunction with a TV monitor device according to, for example, the NTSC standard. Although it is possible to reproduce an image taken via such a solid state imaging device based on the VGA standard on a TV monitor, the horizontal resolution is not good enough since the number of pixels along the horizontal direction is less than the required value of 768.
To achieve a higher resolution while maintaining the aspect ratio at 1:1, it has been proposed to employ a solid state imaging device having a greater number of pixels such as 1240×1024 pixels.
However, to directly display an image taken via such a high-density solid state imaging device on a TV monitor, the TV system is required to have the capability of dealing with such great numbers of pixels along the vertical and horizontal directions, and thus the TV system is required to be of the high-density type such as a high-vision system. This results in an increase in system cost. Another problem in this technique is that it is required to convert image data from digital form to analog form at a very high clock rate, and complicated and difficult techniques are required in processing circuits.
Thus, when an image taken via a solid state imaging device having such a great number of pixels is displayed, for example, on a TV monitor according to the NTSC standard, it is required to select a limited area of the entire image as shown in FIG. 34 so that the selected area is displayed on the screen of the TV monitor having a smaller number of scanning lines than that of the solid state imaging device. For example, it is required to select a 640×480 area within the total image area consisting of 1240×1024 pixels.
However, the difference between the display area and the area of the image taken via the solid state imaging device causes difficulty in knowing whether an image is being recorded in a manner suitable for reproducing the image on a monitor. Furthermore, in an operation of taking an image, it becomes impossible to obtain full-area framing using a monitor element such as an electronic viewfinder (hereafter referred to as an EVF). Thus, this technique is very inconvenient in practical use.
Another problem of the above technique is that the production yield of solid state imaging devices having a great number of pixels is low and thus this type of solid state imaging device is very expensive, which results in a high cost of the total system.
Thus, as described above, various formats and techniques are used to record an image via a camera and to reproduce the recorded image on a monitor via a personal computer, as shown in FIG. 35.
In FIG. 35, a camera 101 is of the type having a solid state imaging device with a pixel aspect ratio other than 1:1 such as that shown in FIG. 33A. This camera 101 has the capability of displaying an image on a TV monitor 108.
A camera 102 is of the type having a solid state imaging device with a pixel aspect ratio equal to 1:1 such as that shown in FIG. 33B.
Both cameras 101 and 102 are designed to use a memory card 103 or 104 as a recording medium. These memory cards 103 and 104 are the same in geometrical configuration and electrical specifications as those used in personal computer systems, and thus these memory cards 103 and 104 may be used by inserting either one into a memory card slot provided in a personal computer 105.
If the memory card 103 or 104 is inserted into the memory card slot of the personal computer, image data obtained via the camera 101 or 102 can be inputted to the personal computer via the memory card 103 or 104.
Either a software program 106 or 107 is selected depending on the recording format of the camera 101 or 102, and the image data input into the personal computer 105 is processed by the personal computer with the selected software program so as to display the image data on a monitor screen of the personal computer system.
To process the image data obtained via the camera 101, the software program 106 is required to convert the pixel aspect ratio as described above with the result that the software program 106 is poor in versatility compared with the software program 107 which is not required to perform pixel aspect ratio conversion.
As a result, although the memory cards 103 and 104 used by the cameras 101 and 102 are of the same type in geometrical configuration and electrical specifications, these memory cards 103 and 104 cannot be used interchangeably since image data is recorded on them in different formats. This is very inconvenient for a user.
Even if the same unified recording format is employed, it will still be impossible to correctly display, on a TV monitor screen, images taken via different cameras having a solid state imaging device with different pixel configurations, since the sampling clock rate used in the digital-to-analog conversion varies depending on the pixel configuration employed.
Although there is no compatibility between cameras having different recording formats or different pixel configurations as described above, an image taken via any type of camera can be displayed on a monitor of a personal computer if a user converts the image data into a correct format using a personal computer with a software program.
FIGS. 36 to 38 illustrate conventional techniques of converting image data consisting of 600 pixels in the vertical direction into image data consisting of 480 pixels in the vertical direction. In these figures, data is shown in an enlarged fashion starting with a pixel located at the uppermost point in a vertical direction. The pixel position relative to the starting point is denoted by the vertical pixel number or line number.
In FIG. 36, data 111 consists of noninterlaced 600-line data. A1, A2, A3, . . . denote interlaced lines of an odd-numbered field before being converted into noninterlaced form. Similarly, B1, B2, B3, . . . denote interlaced lines of an even-numbered field before being converted into noninterlaced form.
The data 111 can be converted into data 112 consisting of 480 lines by producing 8 lines from 10 lines of the data 111 according to the algorithm shown in FIG. 36.
That is, the value of line 1 in the data 112 is given as (4A1+1B1)/5, that of line 2 as (3B1+2A2)/5, that of line 3 as (2A2+3B2)/5, that of line 4 as (1B2+4A3)/5, that of line 5 as (4B3+1A4)/5, that of line 6 as (3A4+2B4)/5, that of line 7 as (2B4+3A5)/5, that of line 8 as (1A5+4B5)/5, and so on.
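The weighted averages above can be sketched in code. This is only our reading of the algorithm of FIG. 36 (the function name and data layout are assumptions, not from the disclosure): every group of 10 input lines, ordered A1, B1, A2, B2, ..., B5, yields 8 output lines, each a two-line average whose weights sum to 5.

```python
# Sketch of the FIG. 36 downconversion (our reading, not code from the
# disclosure): 600 noninterlaced lines -> 480 lines, producing 8 output
# lines from every 10 input lines.

# Each tap: (offset of the first input line within the group of 10,
# weight of that line, weight of the following line).
TAPS = [(0, 4, 1), (1, 3, 2), (2, 2, 3), (3, 1, 4),
        (5, 4, 1), (6, 3, 2), (7, 2, 3), (8, 1, 4)]

def convert_600_to_480(lines):
    """lines: 600 scalar line values ordered A1, B1, A2, B2, ...;
    returns the 480 converted line values."""
    assert len(lines) % 10 == 0
    out = []
    for base in range(0, len(lines), 10):
        for off, w0, w1 in TAPS:
            out.append((w0 * lines[base + off] + w1 * lines[base + off + 1]) / 5)
    return out
```

For the first group of 10 lines this reproduces the values listed above, e.g. line 1 = (4A1+1B1)/5 and line 5 = (4B3+1A4)/5.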
When the above-described data conversion is performed on image data taken via a solid state imaging device of the interline type, the conversion can be performed in such a manner as represented by data 113 to 117 of FIGS. 37 and 38.
In the case of a camera equipped with a solid state imaging device of the interline type, the output signal represents the sum of some pixel values along the vertical direction. In FIG. 37, therefore, the output data along the vertical direction are designated by the scanning line number to distinguish them from the pixels of the solid state imaging device.
The data 113 represents pixels along the vertical direction of the solid state imaging device. Furthermore, to distinguish odd-numbered pixels from even-numbered pixels, the odd-numbered pixel data are denoted by a1, a2, a3, . . . , and the even-numbered pixel data are denoted by b1, b2, b3, and so on.
In the case of a solid state imaging device of the interline type with complementary color filters, two pixel signals along the vertical direction are combined into one signal. To achieve a sufficiently good vertical resolution, the addition operation is performed for different pairs of pixels depending on whether the field is an odd-numbered or an even-numbered one.
That is, for odd-numbered fields, as shown by the data 114 in FIG. 37, data A1 of line 1 is given as a1+b1, data A2 of line 3 is given as a2+b2, data A3 of line 5 is given as a3+b3, data A4 of line 7 is given as a4+b4, and data A5 of line 9 is given as a5+b5. The other data are produced in a similar manner so as to obtain complete frame data.
For even-numbered fields, as shown by the data 115 in FIG. 37, data B1 of line 2 is given as b1+a2, data B2 of line 4 is given as b2+a3, data B3 of line 6 is given as b3+a4, data B4 of line 8 is given as b4+a5, and data B5 of line 10 is given as b5+a6. The other data are produced in a similar manner so as to obtain complete frame data.
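The field-dependent pairing described above can be sketched as follows. This is only an illustration of our reading of FIG. 37 (the function name and list layout are assumptions): the odd field sums each pixel with the one below it starting at a1, while the even field shifts the pairing by one pixel.

```python
# Sketch of the FIG. 37 interline readout (our reading, not code from
# the disclosure): adjacent vertical pixel pairs are summed, with the
# pairing shifted by one pixel between odd and even fields.

def read_fields(pixels):
    """pixels: vertical samples ordered a1, b1, a2, b2, ...
    Returns (odd_field, even_field) line values."""
    a = pixels[0::2]        # a1, a2, a3, ...
    b = pixels[1::2]        # b1, b2, b3, ...
    # Odd field (lines 1, 3, 5, ...): A_n = a_n + b_n
    odd = [an + bn for an, bn in zip(a, b)]
    # Even field (lines 2, 4, 6, ...): B_n = b_n + a_(n+1)
    even = [bn + an1 for bn, an1 in zip(b, a[1:])]
    return odd, even
```

Shifting the pairing between fields is what preserves vertical resolution across the two fields, since each field samples a different set of pixel boundaries.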
In the data 114 and 115 shown in FIG. 37, (R−Y) and (B−Y) denote color-difference signals obtained via color filters of the solid state imaging device.
From these data 114 and 115, noninterlaced data 116 can be obtained as shown in FIG. 38. In the data 116, missing color-difference signals are recovered by means of simple embedding.
That is, the data of line 1 is given by A1 wherein R−Y=A1 and B−Y=A0, the data of line 2 is given by B1 wherein R−Y=B1 and B−Y=B0, the data of line 3 is given by A2 wherein R−Y=A1 and B−Y=A2, the data of line 4 is given by B2 wherein R−Y=B1 and B−Y=B2, the data of line 5 is given by A3 wherein R−Y=A3 and B−Y=A2, the data of line 6 is given by B3 wherein R−Y=B3 and B−Y=B2, the data of line 7 is given by A4 wherein R−Y=A3 and B−Y=A4, the data of line 8 is given by B4 wherein R−Y=B3 and B−Y=B4, the data of line 9 is given by A5 wherein R−Y=A5 and B−Y=A4, the data of line 10 is given by B5 wherein R−Y=B5 and B−Y=B4, and so on.
If the data 116 is converted into the 480-line format in a manner similar to that employed to obtain the data 112 from the data 111, data 117 can be obtained as shown in FIG. 38.
If the resultant data is represented in terms of the vertical pixel data a1, b1, . . . , etc. of the data 113 of the solid state imaging device, it can be seen that each line calculated from two lines of the data 116 is in fact calculated from three pixel data. For example, the luminance signal Y of line 1 of the data 117 can be represented in terms of the vertical pixel data as:

Y=(4a1+5b1+1a2)/5

From the above representation, it can be seen that the luminance signal Y includes the three pixel data a1, b1 and a2.
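The expansion quoted above can be checked numerically. The pixel values below are arbitrary illustrative numbers, not from the disclosure:

```python
# Verification of the expansion above: line 1 of the data 117 is
# (4*A1 + 1*B1)/5 with A1 = a1 + b1 (data 114) and B1 = b1 + a2
# (data 115), which expands to (4*a1 + 5*b1 + 1*a2)/5.

a1, b1, a2 = 10.0, 20.0, 30.0       # arbitrary pixel values

A1 = a1 + b1                        # odd-field line 1
B1 = b1 + a2                        # even-field line 2

line1 = (4 * A1 + 1 * B1) / 5       # conversion step (data 116 -> 117)
expanded = (4 * a1 + 5 * b1 + 1 * a2) / 5

assert abs(line1 - expanded) < 1e-12   # both equal 34.0 here
```

The two forms agree, confirming that a single converted line mixes three vertical pixels of the original device.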
As this representation shows, the data undergoes an undesirable dispersion in the vertical direction, which causes a great reduction in the vertical resolution relative to the original image.
As described earlier, in the VIDEO-1 standard regarding digital still cameras, the numbers of pixels along the vertical and horizontal directions of one image are defined on the basis of the number of pixels of a solid state imaging device, and thus this standard is not suitable for displaying an image on a monitor of a personal computer system.
On the other hand, the pixel configuration consisting of 640×480 elements according to the VGA standard widely employed in personal computers cannot provide a horizontal resolution good enough to display an image on a TV monitor.
If the number of pixels is increased beyond that defined by the VGA standard to improve the horizontal resolution, the full area of the image cannot be displayed on a TV monitor according to the NTSC standard, and such a format is also not suitable for displaying an image on an EVF.
Furthermore, as described above in connection with FIG. 35, the various recording formats are incompatible with each other. This is another problem with conventional techniques.
Another serious problem in the conventional technique is that if image data taken via a camera equipped with a solid state imaging device of the interline type is converted with respect to the number of pixels using a personal computer with a software program, a great reduction in vertical resolution occurs.