1. Field of the Invention
The invention relates in general to an apparatus for video data processing and method therefor, and more particularly to an apparatus and a method for video data processing in digital video disk (DVD) decoding, capable of performing video-frame aspect-ratio conversion.
2. Description of the Related Art
In order to satisfy consumers' demand for recording media with larger data capacity, better video quality, and greater convenience in use, a number of leading vendors and developers joined together in 1996 to establish the specification of a new recording medium, the digital video disk (DVD), expecting the DVD to replace the conventional compact disc (CD). Under the DVD specification, the physical size of a DVD is substantially equal to that of a CD, but the data capacity of a DVD is at least seven times larger than that of a CD of the same physical size. DVDs can therefore record much more video data and are much more convenient for consumers. In the DVD specification, the standard for video optical disks is designated the DVD-Video format. The DVD-Video format allows different video compression methods to be used in making DVDs; in particular, the Moving Picture Experts Group 2 (MPEG-2) standard is used as the video compression method. Because of their high data capacity, DVDs are not only recording media for video but versatile recording media in general. The scope of the DVD specification has been extended to audio recording media as well as writable and rewritable media, and the corresponding standards and products for DVD-Audio, DVD-ROM, and DVD-RAM have accordingly been made.
Regarding video playback, a user may notice that a DVD processing device, such as a DVD player, has a special function of adjusting the aspect ratio of the video for different display modes. That is, a display device for displaying the video has its own specific aspect ratio, for example 4:3, which may differ from that of the video source, for example a 16:9 video. In this case, the video can be adapted to the aspect ratio of the display device by performing specific video data processing, called scaling, during decoding. In brief, after decoding, the video output to the display device has been scaled up or down. In addition, DVD players generally provide different aspect-ratio display modes so as to correspond to display devices with different aspect ratios.
Another feature of the DVD-Video format is the sub-picture system, which provides a variety of functions for users. Sub-pictures, such as subtitles, are conventionally superimposed on the video, referred to as the main pictures, in a video compact disc (VCD). In contrast, DVD allows subtitles to be recorded as sub-pictures that are separately encoded as a plurality of sub-picture units (SPUs) stored in a data stream distinct from the MPEG-2 bit stream of the video. Hence, DVD-Video disc producers can take advantage of the DVD sub-picture system to design and create sub-pictures separately so as to provide users with a variety of sub-picture functions, for example, selection of subtitles or information in different languages, or an interactive operating environment with menus for interaction with the DVD player. A DVD-Video disc can provide at most 32 sub-picture tracks, so subtitles in at most 32 languages can be provided. In addition, the user can optionally turn the sub-pictures on or off, as well as select colors of the sub-pictures, through the DVD player. However, the display colors in each sub-picture are limited to four colors since the palette information allocated to a sub-picture is defined by a 2-bit data field.
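The four-color limit follows directly from the width of the data field: a 2-bit code can index at most 2^2 = 4 palette entries. The following sketch illustrates this constraint; the four pixel-type names are the conventional DVD sub-picture pixel types, but the mapping itself is illustrative rather than taken from any particular disc.

```python
# A 2-bit color code can address at most 2**2 = 4 palette entries,
# which is why each sub-picture displays at most four colors.
PALETTE = {
    0b00: "background",
    0b01: "pattern",
    0b10: "emphasis-1",
    0b11: "emphasis-2",
}

def decode_color_code(code: int) -> str:
    """Map a 2-bit sub-picture color code to its palette entry."""
    if not 0 <= code <= 3:
        raise ValueError("sub-picture color codes are 2 bits wide")
    return PALETTE[code]
```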
When a video in DVD-Video format is displayed, the MPEG-encoded video data is decoded into decoded video data containing the main pictures. If the corresponding sub-pictures are turned on, the SPUs are decoded by an SPU decoding method into decoded SPU data containing the sub-pictures. The sub-pictures are further superimposed on the main pictures by video data processing. The bit stream read from a DVD is typically divided into different bit streams by a parser, and these bit streams are fed into respective decoders according to their characteristics, such as video, audio, and SPU. For example, the parser identifies the video, audio, and SPU bit streams and outputs the video bit stream to a video decoder, the audio bit stream to an audio decoder, and the SPU bit stream to an SPU decoder, respectively. The video decoder decodes the video bit stream into decoded video data while the SPU decoder decodes the SPU bit stream into decoded SPU data. The decoded SPU data is combined with the decoded video data so as to produce a combined signal in digital form. A display encoder, such as a television encoder, then converts the combined signal into an analog display signal so as to display the video, for example, on a television set.
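The demultiplexing step described above can be sketched as follows. This is a minimal illustration, not the actual DVD program-stream parser: real DVD streams identify packet types by MPEG stream IDs, whereas the tags used here are simplified stand-ins.

```python
# Hypothetical sketch of a parser routing packets from a multiplexed
# bit stream to per-type decoder queues (video, audio, SPU).
VIDEO, AUDIO, SPU = "video", "audio", "spu"

def route_packets(packets):
    """Split (kind, payload) packets into per-decoder bit streams."""
    streams = {VIDEO: [], AUDIO: [], SPU: []}
    for kind, payload in packets:
        streams[kind].append(payload)
    return streams
```

Each resulting queue would then feed its dedicated decoder, mirroring the parser-to-decoder fan-out in the text.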
As mentioned above, a DVD player provides different video scaling modes corresponding to the aspect ratios of different display devices, and the output video signal to the display device is scaled up or down by a scaling process. For example, when a 16:9 video stored on a DVD-Video disc is to be displayed on a television compliant with the National Television Standards Committee (NTSC) system with an aspect ratio of 4:3, the DVD player needs to perform a scaling process on the video so as to display the full aspect of the video. It should be noted that scaling of the sub-pictures and scaling of the main pictures are both required so as to maintain the desired display quality of the sub-pictures in different scaling modes. If the scaling of the sub-pictures is not done, or not properly done, the subtitles or the text in the menus of the sub-pictures may be distorted, thus degrading the display quality.
The SPU decoding, the video scaling, and the mixing of video and sub-pictures are conventionally done by a video data processing system shown in FIG. 1. The video data processing system in FIG. 1 has three main components: a video data scaling unit 10, an SPU decoding and scaling unit 20, and a video and sub-picture data mixer 190. In the conventional approach, the video data scaling unit 10 first performs a scaling process on MPEG-decoded video data, designated as VD, according to a preset scaling mode with interpolation, and then outputs scaled decoded video data, piece by piece. The SPU decoding and scaling unit 20, on the other hand, receives the SPU bit stream (designated as SB), performs the SPU decoding of the SPU bit stream, scales the decoded SPU data by interpolation according to the aspect ratio of the scaling mode settings, and then outputs scaled decoded SPU data, piece by piece. The video and sub-picture data mixer 190 then combines the scaled decoded video data and the scaled decoded SPU data so as to produce the required combined signal.
In the video data processing system shown in FIG. 1, the decoded video data as well as the decoded SPU data is processed on a horizontal line basis. In addition, the scaling of the main pictures and the scaling of sub-pictures are based on interpolation. For example, the scaling up of the main pictures by interpolation is performed based on every two adjacent horizontal lines of a current main picture. After interpolation, the video data of a new horizontal line is generated and is arranged between the two adjacent horizontal lines in a new frame, wherein the video data of the new horizontal line is based on a weighted sum of the video data of two adjacent horizontal lines. Suppose that VDk-1 and VDk represent video data of a (k−1)th and kth horizontal lines respectively. In the case of scaling up the current main picture, the video data of the new horizontal line can be determined by VDk-1×W1+VDk×W2, where W1 and W2 are weights. The weights can be changed according to the requirements for the scaling modes to scale up or scale down the video frames.
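The weighted-sum interpolation above can be sketched directly. This is an illustrative model of the formula VDk-1×W1+VDk×W2, operating on one pair of adjacent horizontal lines; the weight values in the usage example are assumptions, since a real scaler derives W1 and W2 from the selected scaling mode.

```python
# Sketch of line interpolation: the new horizontal line is the weighted
# sum of the previous line (weight w1) and the current line (weight w2),
# computed pixel by pixel.
def interpolate_line(prev_line, curr_line, w1, w2):
    """Generate a new line between two adjacent lines by weighted sum."""
    assert len(prev_line) == len(curr_line)
    return [p * w1 + c * w2 for p, c in zip(prev_line, curr_line)]
```

For example, with equal weights W1 = W2 = 0.5, the inserted line is the pixel-wise average of its two neighbors.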
Since the scaling of the video and sub-pictures is done by using interpolation, the video data scaling unit 10 and the SPU decoding and scaling unit 20 each include two buffers and one interpolator. These buffers serve as first-in first-out queues (FIFOs).
The video data scaling unit 10 includes a video data buffer 110, a video data backup buffer 120, and a video data interpolator 130. The video data buffer 110 is used for receiving decoded video data (VD) from an MPEG video decoder (not shown). The decoded video data outputted by the MPEG video decoder can be temporarily stored in a dedicated memory and then a piece of the video data, possibly representing a complete horizontal line or a segment of a horizontal line of a current video frame, is read from the dedicated memory to the video data buffer 110 for use in scaling. The video data backup buffer 120 is employed to store the decoded video data with respect to a previous horizontal line of the current video frame. With the video data backup buffer 120, the video data interpolator 130 can read the decoded video data with respect to two adjacent horizontal lines during interpolation, and does not need to read the video data of the previous horizontal line from the dedicated memory for the MPEG video decoder again. The video data backup buffer 120, thus, is required to have data capacity for storing video data of a whole horizontal line. The video data buffer 110, on the other hand, is not required to have the capacity of storing video data of a whole horizontal line. For example, when the processing of current video data is almost complete, the video data of the next horizontal line is read from the dedicated memory to the video data buffer 110.
When the scaling of the video frame is performed, the video data buffer 110 provides video data VDk of a current line segment while the video data backup buffer 120 provides video data VDk-1 of a previous line segment adjacent to the current horizontal line segment. The video data VDk and VDk-1 are simultaneously fed into the video data interpolator 130. In this way, the video data interpolator 130 performs interpolation to determine the video data of the line segment to be inserted between the two adjacent line segments as VDk-1×W1+VDk×W2.
Referring to FIG. 1, the SPU decoding and scaling unit 20 includes an SPU buffer 140, an SPU decoder 150, an SPU backup buffer 160, an SPU palette device 170, and an SPU interpolator 180. Each SPU includes pixel data of the sub-pictures, and the pixel data is run-length coded (RLC). The SPU decoder 150 performs SPU decoding on an SPU bit stream (SB). The SPU decoder 150 outputs pixel data of a line segment of a current sub-picture to the SPU backup buffer 160 in the form of corresponding color codes. The SPU palette device 170 receives the color codes from the SPU decoder 150 and generates color data in Y/Cr/Cb component video signal format. Meanwhile, the SPU palette device 170 also receives the color codes of the previous line segment of the current sub-picture from the SPU backup buffer 160, and outputs the color data corresponding to the current line segment and the previous line segment to the SPU interpolator 180. Further, the SPU buffer 140 is employed in the SPU decoding and scaling unit 20 to adjust the data flow of the SPU bit stream to the processing capability of the SPU decoder 150, wherein the SPUs in the SPU bit stream are fed into the SPU decoder 150 piece by piece. When the sub-pictures are to be scaled up, the SPU palette device 170 outputs the color data SDk and SDk-1 respectively containing the information of the current line segment and the previous line segment. The SPU interpolator 180 receives the color data SDk and SDk-1 simultaneously and performs interpolation according to the preset scaling mode above to determine a new line segment to be inserted between the two adjacent line segments (i.e. the current and the previous line segments) as SDk-1×W1+SDk×W2.
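The two SPU stages described above, run-length decoding into color codes followed by palette conversion into Y/Cr/Cb color data, can be sketched as below. The (run_length, color_code) pair encoding and the palette values are simplified stand-ins for the actual DVD SPU formats, chosen only to make the data flow concrete.

```python
# Sketch of the SPU decoder stage: expand run-length-coded pixel data
# into a line segment of 2-bit color codes.
def rle_decode(runs):
    """Expand (run_length, color_code) pairs into a line of color codes."""
    line = []
    for length, code in runs:
        line.extend([code] * length)
    return line

# Sketch of the SPU palette device stage: convert each color code into
# color data, here represented as a (Y, Cr, Cb) tuple.
def palette_lookup(codes, palette):
    """Convert 2-bit color codes into Y/Cr/Cb color data."""
    return [palette[c] for c in codes]
```

In the system of FIG. 1, the palette stage runs twice per output line (once for the current segment, once for the backed-up previous segment), which is the cost the text returns to in its discussion of disadvantages.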
The required video data VS is finally obtained by mixing the output data of the video data interpolator 130 and the SPU interpolator 180 by the video and sub-picture data mixer 190.
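The final mixing step might be sketched as a simple per-pixel overlay. This is an assumption made for brevity: DVD sub-pictures actually carry per-pixel contrast values that allow blending rather than plain replacement, so the transparent-pixel convention used here (None) is purely illustrative.

```python
# Illustrative mixer sketch: where a scaled sub-picture pixel is present,
# it replaces the corresponding scaled video pixel; None marks a
# transparent sub-picture pixel that lets the video show through.
def mix_line(video_line, spu_line):
    """Overlay a sub-picture line segment onto a video line segment."""
    return [s if s is not None else v for v, s in zip(video_line, spu_line)]
```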
As described above, in order to meet the need of scaling for different display devices with different aspect ratios, the conventional video data processing system uses the interpolation method to perform scaling of the video and the sub-pictures. Unfortunately, the video data processing system has the following disadvantages when implemented in circuits. First, two individual backup buffers are required for the interpolation processing of the video and the sub-pictures. Secondly, the SPU palette device is required to simultaneously convert two SPU color codes into color data. As a result, the SPU palette device is generally implemented with either a conversion circuit operating at double the operating frequency of the video data processing system, or two conversion circuits operating together in parallel. Thirdly, two individual interpolators are required to perform interpolation for the video and the sub-pictures, respectively. Thus, the circuit size and the manufacturing cost are increased.