An image signal typically used in systems such as NTSC, PAL, SECAM and hi-vision (1080i or the like) employs an interlace method, in which alternate scanning lines are decimated for transfer so that an image of one frame is formed with two fields. When such an image signal is displayed on a display device employing a progressive scanning method, such as a liquid crystal display or a PDP, it is always necessary to perform IP (interlace-to-progressive) conversion.
In general IP conversion, motion is detected for every pixel of an image signal, and static image processing and dynamic image processing are switched or mixed in accordance with the detected motion, whereby a satisfactory result is obtained. This IP conversion is designated as motion adaptive IP conversion.
As a conventional example, a part of the motion adaptive IP conversion concerned with IP conversion of a color-difference signal is shown in the block diagram of FIG. 21. In FIG. 21, a color-difference signal IP converter 1 includes a color-difference static image processing unit 6 for generating a static image signal through field insertion, a color-difference dynamic image processing unit 7 for generating an interpolated pixel from pixels included in a field, and a static/dynamic mixing unit 8 for mixing the outputs of the color-difference static image processing unit 6 and the color-difference dynamic image processing unit 7 in accordance with motion detection information. A field-delayed color-difference signal is supplied to the color-difference static image processing unit 6, the current-field color-difference signal is supplied to both the color-difference static image processing unit 6 and the color-difference dynamic image processing unit 7, and the result of the IP conversion is output from the static/dynamic mixing unit 8.
With respect to a still picture (a static image), the original one-frame picture can be generated through insertion of the pictures of two successive fields (inter-field insertion). This conversion into a progressive signal through insertion is realized by the color-difference static image processing unit 6. With respect to a moving picture (a dynamic image), on the other hand, it is necessary to perform interpolation based on pixels included in a single field (intra-field interpolation), because simple insertion would generate a picture shifted in alternate lines. This processing is realized by the color-difference dynamic image processing unit 7. Since most image signals include both a still portion and a moving portion in one screen, the static/dynamic mixing unit 8 mixes the two conversion results on the basis of the motion detected for every pixel so as to output the ultimate IP conversion result of the color-difference signal.
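The per-pixel static/dynamic mixing described above can be sketched as follows. This is a minimal illustration only; the function names, the 2-tap intra-field interpolation, and the linear mixing rule are assumptions for the sketch, not details taken from the conventional art.

```python
# Sketch of motion adaptive IP conversion for one interpolated pixel.
# 'motion' is assumed to be a per-pixel value in [0, 1]:
# 0 = fully static, 1 = fully moving.

def static_pixel(prev_field_pixel):
    """Static image processing: take the co-sited pixel of the
    previous field (inter-field insertion)."""
    return prev_field_pixel

def dynamic_pixel(upper_pixel, lower_pixel):
    """Dynamic image processing: interpolate from the lines above and
    below within the current field (intra-field interpolation)."""
    return (upper_pixel + lower_pixel) / 2

def mix(prev_field_pixel, upper_pixel, lower_pixel, motion):
    """Static/dynamic mixing in accordance with detected motion."""
    s = static_pixel(prev_field_pixel)
    d = dynamic_pixel(upper_pixel, lower_pixel)
    return (1 - motion) * s + motion * d

print(mix(prev_field_pixel=100, upper_pixel=100, lower_pixel=0,
          motion=0.0))   # -> 100.0 (still portion: previous field kept)
print(mix(prev_field_pixel=100, upper_pixel=100, lower_pixel=0,
          motion=1.0))   # -> 50.0 (moving portion: intra-field result)
```

Intermediate motion values blend the two results, which corresponds to the mixed output of the static/dynamic mixing unit 8.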
Recently, digitization of image signals has advanced, and in particular, digital broadcasting, DVDs and the like employing image compression techniques typified by MPEG have come into remarkably widespread use. Compression of image signals employing MPEG exploits the fact that human vision is less sensitive to a color-difference signal than to a luminance signal, and performs a process for decimating the number of lines of the color-difference signal to a half of the number of lines of the luminance signal. This operation will be described with reference to FIGS. 22A through 22C.
FIGS. 22A through 22C show an operation for converting a progressive signal into an interlaced signal when MPEG2 is employed, in which Y indicates a luminance signal and C indicates a color-difference signal. Color-difference signals are actually classified into two kinds, namely, R-Y signals and B-Y signals, but since these two kinds are processed similarly, both are described simply as the color-difference signal C in the following description. FIGS. 22A through 22C show the relationships among pixels obtained in converting a progressive signal into an interlaced MPEG2 signal, and the vertical direction of the drawings corresponds to the vertical direction of a picture on the screen.
FIG. 22A shows the arrangement of pixels in the state of a progressive signal. When, for example, the NTSC system is employed, the number of effective lines is 480, which means that there are 480 pixels arranged along the vertical direction of FIG. 22A. As shown in FIG. 22A, the signal Y and the signal C have pixels in the same number of lines in the progressive state. Each numerical value shown in the drawing indicates the level of the corresponding pixel, and a state where the level changes from 100 to 0 in the downward direction is shown in both the signals Y and C. This state is designated as "progressive 4:2:2". Although in 4:2:2 the number of pixels of the R-Y signal and the B-Y signal is actually also decimated to a half along the horizontal direction, the description is herein given with respect to the line (vertical) direction alone.
FIG. 22B shows a state where one of every two lines of the color-difference signal in the progressive 4:2:2 of FIG. 22A is decimated, and this state is designated as "progressive 4:2:0". In order to prevent frequency folding (aliasing) derived from the decimation, a vertical LPF is provided so that the center of gravity of the pixels falls in the middle relative to the signal Y. In FIG. 22B, the simplest LPF, a 2-tap average, is used.
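The 2-tap vertical decimation of FIG. 22B can be sketched as follows. The concrete pixel levels are the illustrative 100-to-0 ramp of FIG. 22A; the variable names are assumptions for the sketch.

```python
# Sketch of the 4:2:2 -> 4:2:0 vertical decimation with the simplest
# 2-tap averaging LPF.

def to_420_lines(c_lines):
    """Halve the number of color-difference lines: each output line is
    the average of two input lines, so the center of gravity of the
    output pixel falls midway between them."""
    return [(c_lines[i] + c_lines[i + 1]) / 2
            for i in range(0, len(c_lines) - 1, 2)]

# Progressive 4:2:2 color-difference ramp changing from 100 to 0.
c_422 = [100, 100, 100, 0, 0, 0]
print(to_420_lines(c_422))   # -> [100.0, 50.0, 0.0]
```

The averaged line at the transition (50.0) carries the band-limited level change into the 4:2:0 signal.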
FIG. 22C shows a state obtained by converting the state of FIG. 22B into an interlaced signal. An interlaced signal is regarded as a signal in which alternate lines of a progressive signal are decimated so that the signal is decomposed into two fields; the field starting from the top line is designated as the top field and the field starting from the second line as the bottom field. Although band limitation is also applied in interlacing, both the signals Y and C are simply decimated to a half in FIG. 22C for ease of understanding.
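The field decomposition of FIG. 22C, with the band limitation omitted as in the figure, can be sketched as follows (names and levels are illustrative assumptions):

```python
# Sketch of decomposing a progressive signal into two interlaced
# fields by simple decimation of alternate lines.

def split_fields(lines):
    """Top field takes the top line and every second line thereafter
    (even indices); bottom field takes the remaining lines."""
    top = lines[0::2]
    bottom = lines[1::2]
    return top, bottom

# Progressive luminance ramp changing from 100 to 0.
y_progressive = [100, 100, 100, 100, 0, 0, 0, 0]
top, bottom = split_fields(y_progressive)
print(top)     # -> [100, 100, 0, 0]
print(bottom)  # -> [100, 100, 0, 0]
```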
As for the format of an image signal, a state where the number of lines of a color-difference signal is the same as that of a luminance signal is designated as 4:2:2, and a state where the number of lines of a color-difference signal is a half of that of a luminance signal is designated as 4:2:0.
In digital broadcasting or on a recording medium such as a DVD or an HD recorder employing MPEG compression, a color-difference signal is in the 4:2:0 state.
On the other hand, in the output of digital image equipment, the number of lines of a luminance signal and that of a color-difference signal are defined to be the same. Therefore, a process for converting 4:2:0 into 4:2:2 is performed after MPEG decoding in a digital broadcasting decoder or in an MPEG decoder included in a DVD player, an STB or an HD recorder. Also, even within the digital image equipment or within a digital decoder integrated circuit, the output of a digital decoder may be dealt with in the 4:2:2 state.
Next, an example of the conversion from 4:2:0 to 4:2:2 and an example of the static image processing performed in the IP conversion will be described.
FIGS. 23A through 23C show, as a method for converting 4:2:0 to 4:2:2, line doubler processing in which each line is output twice. Since this processing is easily realized, actual products, such as DVD players, that decode and output signals through this processing are available. FIG. 23A shows the state of interlaced 4:2:0 as stored in MPEG2, and in FIG. 23B, the 4:2:2 state is realized by restoring the number of lines of the color-difference signal through the line doubler, which repeats each pixel. FIG. 23C shows a signal obtained through the IP conversion of the 4:2:2 interlaced signal of FIG. 23B by the static image processing, namely, the field insertion. Although the luminance signal Y, whose lines were never decimated to a half, is completely restored, the level fluctuates back and forth at a level change point in the color-difference signal C. This appears as jaggy in a picture, and a picture having unnatural-looking vertical burr is generated.
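The line-doubler path just described can be sketched as follows. The concrete levels are an illustrative 100-to-0 ramp rather than the exact values of FIGS. 23A through 23C, and the function names are assumptions.

```python
# Sketch of FIGS. 23A-23C: interlaced 4:2:0 chroma fields are
# line-doubled to 4:2:2 and then woven together by field insertion.

def line_double(field_lines):
    """Line doubler: output each color-difference line twice."""
    out = []
    for line in field_lines:
        out += [line, line]
    return out

def weave(top, bottom):
    """Static image processing: interleave top and bottom field lines
    (inter-field insertion)."""
    out = []
    for t, b in zip(top, bottom):
        out += [t, b]
    return out

# Interlaced 4:2:0 color-difference lines around a 100-to-0 transition.
c_top_420 = [100, 100, 0]    # top field chroma lines
c_bottom_420 = [100, 0, 0]   # bottom field chroma lines

progressive_c = weave(line_double(c_top_420), line_double(c_bottom_420))
print(progressive_c)
# -> [100, 100, 100, 100, 100, 0, 100, 0, 0, 0, 0, 0]
```

The run 100, 0, 100, 0 in the result is the back-and-forth level fluctuation at the transition that appears as jaggy in the picture.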
FIGS. 24A through 24C show, as another method for converting 4:2:0 to 4:2:2, processing in which the decimated lines are generated by interpolation based on pixels of the upper and lower lines within a field. In FIG. 24A, the state of an interlaced 4:2:0 signal as stored in MPEG2 is shown with the center of gravity of the pixels considered, and in FIG. 24B, the pixels decimated in the 4:2:0 state are generated, as the simplest example, by interpolation from one pixel above and one pixel below on the basis of the center of gravity, so as to be converted into 4:2:2. FIG. 24C shows a signal obtained through the IP conversion of the 4:2:2 interlaced signal of FIG. 24B by the static image processing, namely, the field insertion. Also in this example, the back-and-forth level fluctuation occurs at a level change point in the color-difference signal, although to a lesser extent than with the line doubler method.
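This intra-field interpolation path can be sketched in the same manner. The levels, the edge handling at the bottom of a field, and the integer averaging are illustrative assumptions for the sketch.

```python
# Sketch of FIGS. 24A-24C: the decimated chroma lines are regenerated
# within each field from one pixel above and one below, and the fields
# are then woven together by field insertion.

def interpolate_up(field_lines):
    """Double the line count: keep each line and insert the (integer)
    average of it and the next line; the last line is simply repeated."""
    out = []
    for i, line in enumerate(field_lines):
        out.append(line)
        if i + 1 < len(field_lines):
            out.append((line + field_lines[i + 1]) // 2)
        else:
            out.append(line)
    return out

def weave(top, bottom):
    """Inter-field insertion of top and bottom field lines."""
    out = []
    for t, b in zip(top, bottom):
        out += [t, b]
    return out

c_top_420 = [100, 100, 0]    # top field chroma lines
c_bottom_420 = [100, 0, 0]   # bottom field chroma lines

progressive_c = weave(interpolate_up(c_top_420),
                      interpolate_up(c_bottom_420))
print(progressive_c)
# -> [100, 100, 100, 50, 100, 0, 50, 0, 0, 0, 0, 0]
```

Compared with the line-doubler sketch, the fluctuation at the transition (50, 100 and 0, 50 instead of 0, 100) has a smaller amplitude, which corresponds to the lesser extent of the level fluctuation noted above.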
In either of the above-described examples, the decimated lines are restored within a field of the interlaced 4:2:0 signal so as to generate the 4:2:2, and hence the interpolation is performed without considering the relationship between the interlaced fields. Therefore, a problem arises when the inter-field insertion is performed thereafter.
As another method of conventional IP conversion, the inter-field insertion is performed not on the 4:2:2 signal but on a 4:2:0 signal obtained by decimating the lines of the 4:2:2 signal again (see, for example, Patent Document 1).
As still another method, in the case where the color-difference signal is a 4:2:0 signal obtained through the line doubler, the inter-field insertion is performed on a new 4:2:2 signal generated by interpolating, again from upper and lower pixels, the pixels that were interpolated through the line doubler (see, for example, Patent Document 2).
Patent Document 1: International Publication Pamphlet No. 02/052849
Patent Document 2: Japanese Laid-Open Patent Publication No. 2006-121568