1. Field of the Invention
The present invention relates to a 3-D video decoding method, and more particularly to a method of inter-frame Y/C separation.
2. Description of the Related Art
With the advance of technology, electronic devices such as televisions have been widely used to transmit news, information and distant images. Moreover, surveillance systems are used to monitor the surroundings of houses. These systems and devices transmit video signals from senders to receivers.
Light is composed of red (R), green (G) and blue (B) components. Video signals to be transmitted can be transformed into these component signals for transmission. Due to the limitation of transmission bandwidth, the video signals are transformed into luma data (Y) and chroma data (U and V) to solve the bandwidth issue. For example, the relationships between R, G, B and Y, U, V are: Y=0.299R+0.587G+0.114B; U=0.493(B−Y); V=0.877(R−Y). The weightings of R, G and B in the Y formula represent the sensitivities of the naked eye to the different colors. U and V represent the blue and red components after the luma is removed. For achromatic (gray) light, U and V are 0, which means that there is no chromatic content.
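The RGB-to-YUV relationships above can be sketched as follows. This is an illustrative sketch only; the function name rgb_to_yuv and the use of normalized [0, 1] inputs are assumptions, not part of any broadcast standard's API.

```python
def rgb_to_yuv(r, g, b):
    """Convert normalized R, G, B values to luma Y and chroma U, V
    using the weightings given above."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # eye is most sensitive to green
    u = 0.493 * (b - y)                      # blue component with luma removed
    v = 0.877 * (r - y)                      # red component with luma removed
    return y, u, v

# For achromatic light (R = G = B), both chroma components vanish,
# since the Y weightings sum to exactly 1:
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
# y ~ 0.5, while u and v are numerically 0
```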
During signal transmission, the chroma data are carried by sub-carrier signals and then mixed with the luma data. Under the National Television Standards Committee (NTSC) standard, the signals are modulated into composite video signals by the relationship Y+U*sin(ωt)+V*cos(ωt), wherein ω is equal to 2π*Fsc, and Fsc represents the frequency of the sub-carrier signal. Under the Phase Alternating Line (PAL) standard, the signals are modulated with a 180-degree phase difference on the V component: when the PAL system modulates the lines of Y, U and V of a frame, the relationships Y+U*sin(ωt)+V*cos(ωt) and Y+U*sin(ωt)−V*cos(ωt) are applied alternately. That is, when one line is modulated by Y+U*sin(ωt)+V*cos(ωt), the next line is modulated by Y+U*sin(ωt)−V*cos(ωt).
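The two modulation relationships can be sketched in one helper. The name composite_sample and the per-line PAL switch flag are illustrative assumptions, not terms from either standard.

```python
import math

def composite_sample(y, u, v, omega_t, pal_switch_line=False):
    """Composite value Y + U*sin(wt) +/- V*cos(wt).

    NTSC always uses +V*cos(wt); PAL flips the sign of the V term
    on alternate lines (pal_switch_line=True)."""
    sign = -1.0 if pal_switch_line else 1.0
    return y + u * math.sin(omega_t) + sign * v * math.cos(omega_t)

# At omega_t = 0 the U term vanishes and only the V sign alternates:
# an NTSC-style line yields Y + V, a PAL switched line yields Y - V.
```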
After the receiver receives the composite video signals, the signals must first be sampled. A comb filter samples the signals at four times the frequency Fsc. Therefore, each line of NTSC comprises 910 sample points, and each line of PAL has 1135 sample points. Each frame of NTSC has 525 lines; accordingly, each frame of NTSC has 477,750 sample points. Each frame of PAL has 625 lines and 709,379 sample points. Because the total number of sample points is not an integer multiple of the number of lines, different phase errors exist depending on the sample positions.
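A quick arithmetic check of the sample counts quoted above; the four extra PAL samples per frame anticipate the 1135*625+4 figure discussed later for the PAL decoder.

```python
# Samples per frame at 4x the sub-carrier frequency Fsc.
ntsc_samples = 910 * 525        # 477,750 samples per NTSC frame
pal_samples = 1135 * 625 + 4    # 709,379 samples per PAL frame

assert ntsc_samples == 477_750
assert pal_samples == 709_379

# The PAL total is not a multiple of the 625 lines, and dividing it
# by 4 leaves a remainder of 3, which shifts the sample phase from
# frame to frame.
assert pal_samples % 625 != 0
assert pal_samples % 4 == 3
```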
Generally, the essential part of video decoding technology is the separation of luma and chroma. The quality of the Y/C separation directly affects the image quality. To achieve high image quality, the 3-D comb filter has been widely used.
When the 3-D comb filter processes the composite video signals, the signals are sampled with 90-degree phase differences. For NTSC, the sampled signals are Y+V, Y+U, Y−V and Y−U when the sample phases are 0, 0.5π, π and 1.5π, respectively. FIG. 1 is a sampling result of an NTSC frame. Referring to FIG. 1, the vertical axis represents the position x of the line in the frame, and the horizontal axis represents the position y of the pixel in the line. When two sampled data are in neighboring frames and at the same position, the phase difference between the two data is 180 degrees. The sampling relationship of the neighboring frames can also be interpreted from FIG. 1 by replacing the vertical axis with the serial number m of the frame.
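The four-phase NTSC pattern above can be reproduced by evaluating the composite relationship Y+U*sin(ωt)+V*cos(ωt) at the four sample phases; the function name is illustrative.

```python
import math

def ntsc_samples_over_one_cycle(y, u, v):
    """Evaluate Y + U*sin(wt) + V*cos(wt) at phases 0, 0.5pi, pi, 1.5pi,
    yielding Y+V, Y+U, Y-V, Y-U in turn."""
    phases = [0.0, 0.5 * math.pi, math.pi, 1.5 * math.pi]
    return [y + u * math.sin(p) + v * math.cos(p) for p in phases]

# With y = 0 and u = v = 1 the four samples are (up to floating-point
# rounding) +V, +U, -V, -U, i.e. approximately [1, 1, -1, -1].
samples = ntsc_samples_over_one_cycle(0.0, 1.0, 1.0)
```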
Different from NTSC, PAL has 709,379 sample points per frame, which leaves a remainder of 3 when divided by 4. Thus, even at the same position, the data in the first frame is Y+U, the data in the second frame is Y+V, and the data in the third frame is Y−U. FIG. 2A shows the sampling results of PAL at the sample phases 0, 0.5π, π and 1.5π. Referring to FIG. 2A, the vertical axis represents the position x of the line in the frame or in the neighboring frame, and the horizontal axis represents the position y of the pixel in the line. To reduce the complexity of the comb filter, a 45-degree phase shift is applied, i.e., the sample phases become 0.25π, 0.75π, 1.25π and 1.75π. FIG. 2B shows the sampling results of PAL at the sample phases 0.25π, 0.75π, 1.25π and 1.75π. Referring to FIG. 2B, the vertical axis represents the position x of the line in the frame or in the neighboring frame, and the horizontal axis represents the position y of the pixel in the line, wherein A=0.707(U+V) and B=0.707(U−V).
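The definitions A=0.707(U+V) and B=0.707(U−V) follow directly from evaluating the chroma term at the shifted phases, since sin(0.25π)=cos(0.25π)≈0.707. A small numeric check (with illustrative U, V values):

```python
import math

def pal_chroma_at(u, v, omega_t):
    """Chroma contribution U*sin(wt) + V*cos(wt) of a PAL sample."""
    return u * math.sin(omega_t) + v * math.cos(omega_t)

u, v = 0.3, 0.1
a = 0.707 * (u + v)     # expected chroma at phase 0.25*pi
b = 0.707 * (u - v)     # expected chroma at phase 0.75*pi

# At 0.25*pi, sin and cos are both ~0.707, giving 0.707*(U+V) = A;
# at 0.75*pi, cos flips sign, giving 0.707*(U-V) = B.
assert abs(pal_chroma_at(u, v, 0.25 * math.pi) - a) < 1e-3
assert abs(pal_chroma_at(u, v, 0.75 * math.pi) - b) < 1e-3
```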
When a TV decoder samples the PAL signals at 4 times Fsc, each frame has 1135*625+4 sample points, which is a multiple of neither 625 nor 1135. Therefore, when the sampling process is performed with 1135 sample points per line, phase errors occur. After 625 lines are sampled, the accumulated error is four pixels. Usually, the 4-pixel error is shared equally by the 625 lines. Accordingly, each line has a 4/625-pixel phase shift, and the sample phases are not exactly 0.25π, 0.75π, 1.25π and 1.75π. The modulation method of PAL is Y+U*sin(ωt)+V*cos(ωt) or Y+U*sin(ωt)−V*cos(ωt). When ωt is (0.25π+δ), (0.75π+δ), (1.25π+δ) or (1.75π+δ), wherein δ represents the phase error, sin(0.25π+δ)=sin(0.25π)cos(δ)+cos(0.25π)sin(δ)=0.707(cos δ+sin δ)=0.707(1+e0); and cos(0.25π+δ)=cos(0.25π)cos(δ)−sin(0.25π)sin(δ)=0.707(cos δ−sin δ)=0.707(1−e0). Therefore, Y+U*sin(ωt)+V*cos(ωt)=Y+0.707(U+V+e0(U−V))=Y+A+eB. The other phases can be inferred similarly, and the actual sampling results are shown in FIG. 2C, wherein the phase errors are eA=e0A and eB=e0B. FIG. 2C shows the actual sampling results of PAL at the sample phases 0.25π+δ, 0.75π+δ, 1.25π+δ and 1.75π+δ. Referring to FIG. 2C, the vertical axis represents the position x of the line in the frame or in the neighboring frame, and the horizontal axis represents the position y of the pixel in the line.
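The small-phase-error expansion above can be verified numerically. For a small δ, sin(0.25π+δ)≈0.707(1+e0) and cos(0.25π+δ)≈0.707(1−e0) with e0≈sin δ (taking cos δ≈1); the particular δ value below is an illustrative assumption.

```python
import math

delta = 2 * math.pi * (4 / 709_379)   # an illustrative tiny phase error
e0 = math.sin(delta)                  # small-angle approximation of e0

lhs_sin = math.sin(0.25 * math.pi + delta)
lhs_cos = math.cos(0.25 * math.pi + delta)

# Both expansions hold to well within the quoted 3-digit precision.
assert abs(lhs_sin - 0.707 * (1 + e0)) < 1e-3
assert abs(lhs_cos - 0.707 * (1 - e0)) < 1e-3

# Hence Y + U*sin + V*cos ~ Y + 0.707*(U+V) + e0*0.707*(U-V)
#                         = Y + A + e0*B = Y + A + eB
```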
FIG. 3 is a block diagram of a prior art 3-D comb filter. Referring to FIG. 3, the prior art comb filter comprises an inter-frame Y/C separator 310, a 2-D intra-field Y/C separator (i.e., 2-D comb filter) 320, a motion detector 330, a memory 340 and a mixer 350. The composite video signal 301 is a sampled signal, and Fm+1 represents the composite video signal of frame m+1. The memory 340 temporarily stores the composite video signal 301 and provides the composite video signal 305 of frame m. The 2-D comb filter 320 receives the composite video signal 305 and performs Y/C separation according to the relationship between pixels, outputting the separated video signal 321.
The Y/C separation of a motion video signal is handled by the 2-D comb filter 320. However, the 2-D comb filter 320 results in blurred edges for a still video signal. In order to improve the image quality, the still video signal is processed by the inter-frame Y/C separator 310. The prior art inter-frame Y/C separator 310 receives the sampled data of Fm+1 and Fm simultaneously and applies a prior art inter-frame Y/C separation method, outputting the separated video signal 311. The motion detector 330 determines whether the composite video signal 301 is a motion or a still signal. The prior art motion detector 330 is adapted to receive the composite video signal 301 and the separated video signal 321 and measures the luma difference and chroma difference of the neighboring frames, thereby outputting the selection signal 331. The mixer 350 selects the separated video signal 321 or 311, or mixes them, according to the selection signal 331, outputting the separated video signal 351.
The following describes the prior art method of inter-frame Y/C separation for NTSC. Referring to FIG. 1, the prior art method adds and averages the composite video signals of the pixels at the same position, such as y, of the neighboring frames to obtain the luma data; subtracting one from the other yields the chroma data. If noises are added during modulation or signal transmission, the noises reduce the quality of the images.
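Because the chroma phase flips 180 degrees between co-located samples of neighboring NTSC frames, the averaging and differencing above reduce to simple sum-and-difference arithmetic. A minimal sketch, with an illustrative function name:

```python
def ntsc_interframe_separate(sample_m, sample_m1):
    """Separate co-located samples of neighboring frames, where
    sample_m = Y + C and sample_m1 = Y - C (chroma phase inverted)."""
    luma = (sample_m + sample_m1) / 2.0     # chroma cancels
    chroma = (sample_m - sample_m1) / 2.0   # luma cancels
    return luma, chroma

# Example with illustrative values Y = 0.5 and chroma term C = 0.2:
y_out, c_out = ntsc_interframe_separate(0.5 + 0.2, 0.5 - 0.2)
# y_out ~ 0.5, c_out ~ 0.2
```

Note that any noise added to either frame's samples does not cancel in these sums, which is why noise during transmission degrades the separated image, as stated above.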
The following describes the prior art method of inter-frame Y/C separation for PAL. Referring to FIG. 2C, the prior art method adds and averages the composite video signals of the pixels at the same position, such as y, of the frames, such as m−1, m and m+1, to remove the chroma data A or B and the phase error eB or eA, thereby obtaining the luma data Y. If the signals of the frames are subtracted from each other, the luma data Y can be removed, but the phase error eB or eA cannot be eliminated. Therefore, stripes appear in the separated signals.
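A minimal numeric sketch of this PAL failure mode, assuming (after the FIG. 2C pattern) that co-located samples of frames m−1 and m+1 carry opposite-sign chroma and error, i.e. Y+A+eB and Y−A−eB; all values below are illustrative.

```python
# Illustrative values: true luma, true chroma term A, and phase error eB.
y_true, a_true, err = 0.5, 0.2, 0.01

f_prev = y_true + a_true + err   # frame m-1 sample: Y + A + eB
f_next = y_true - a_true - err   # frame m+1 sample: Y - A - eB

# Averaging cancels both the chroma A and the error eB:
luma = (f_prev + f_next) / 2.0

# Differencing removes Y, but the error eB survives on the chroma,
# which appears as stripes in the separated signal:
chroma = (f_prev - f_next) / 2.0

# luma ~ 0.5 (clean), chroma ~ 0.21 = A + eB (not the true A = 0.2)
```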