Television signals are currently broadcast and distributed mostly in `coded` form, that is to say that the original colour picture comprising red, green and blue component signals has been encoded into a single composite signal in accordance with the standards of the PAL, NTSC or SECAM systems, or their variants.
These systems were evolved largely on the basis of broadcast requirements, and their characteristics were determined by such considerations as compatibility with existing monochrome broadcast formats. Consequently, there are aspects of these systems which do not ideally suit the studio environment, where the video signal may be processed by a long chain of equipment and the main requirement is that minimal cumulative degradation takes place. As much of this equipment now uses digital storage and processing techniques, it is appropriate that new video standards have been introduced which operate in the `component` domain, that is `RGB` (red, green, blue) or `YUV` (matrixed RGB, being the three signals derived for use in the `coded` systems), the video signal being transferred between items of equipment in digital P.C.M. form. There is therefore a need for an interface between the coded analog environment and the component environment, to allow conventional analog signals to be further processed in component form.
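By way of illustration, the matrixing of RGB into YUV mentioned above may be sketched as follows. The luminance weights shown are the familiar coefficients shared by the PAL and NTSC systems, and the U and V scaling factors are those of PAL; this is a simplified sketch, and gamma correction and exact per-standard scaling are omitted.

```python
import numpy as np

def rgb_to_yuv(r, g, b):
    """Matrix gamma-corrected RGB into Y (luminance) and the scaled
    colour-difference signals U and V, as used by the coded systems."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                      # scaled (B - Y)
    v = 0.877 * (r - y)                      # scaled (R - Y)
    return y, u, v

# For white (R = G = B), the colour-difference signals vanish:
y, u, v = rgb_to_yuv(1.0, 1.0, 1.0)
```

Note that for any neutral grey the U and V components are zero, which is what allows the coded signal to remain compatible with monochrome receivers.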
Although this interface has obviously existed for as long as the coded systems themselves (for example, in a colour television receiver, where RGB signals are ultimately required for display), there are several levels of refinement associated with the decoding process, the more sophisticated decoding techniques being devised in an attempt to remove the degradations introduced into the decoded component signals as a result of the compromises inherent in the coded signal standard itself. One of the major compromises associated with all existing coded formats is the requirement that the coded colour signal is contained within a bandwidth no greater than that of the corresponding monochrome standard, this being achieved by modulating the colour information (the U and V components) onto a subcarrier situated towards the top of the video bandwidth. This shared bandwidth leads to crosstalk between the signal components, the exact nature of the crosstalk being characteristic of the coded system in use.
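The placement of the colour information on a subcarrier can be sketched, in much simplified form, as quadrature modulation of U and V onto a single carrier added to the luminance. The sample rate chosen below is an illustrative assumption, the subcarrier frequency is that of PAL, and PAL's line-alternating V phase (and SECAM's use of frequency modulation) are omitted for brevity.

```python
import numpy as np

FS = 13.5e6           # assumed sampling rate, Hz (illustrative)
FSC = 4.43361875e6    # PAL colour subcarrier frequency, Hz

def encode_composite(y, u, v):
    """Quadrature-modulate the U and V components onto the subcarrier
    and add the result to the luminance, giving a composite signal
    occupying a single monochrome-width channel."""
    t = np.arange(len(y)) / FS
    carrier_u = np.sin(2 * np.pi * FSC * t)
    carrier_v = np.cos(2 * np.pi * FSC * t)
    return y + u * carrier_u + v * carrier_v
```

Because the modulated chrominance sits towards the top of the luminance band rather than in a band of its own, any luminance energy near the subcarrier frequency is indistinguishable from genuine chrominance, which is the origin of the crosstalk described above.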
These effects are generally referred to as cross-colour and cross-luminance, an example being the appearance of coloured fringes in areas containing high-frequency picture detail (cross-colour).
The simplest form of decoder attempts to separate the luminance and chrominance signal components purely on the basis of their predominant frequency bands, the luminance information being regarded as occupying the lower part of the spectrum and the chrominance the upper part. This technique is basically that applied in the domestic TV set, where the above effects may be observed.
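Such frequency-based separation can be sketched as follows. The moving-average low-pass filter used here is a deliberately crude stand-in for the receiver's filters; any high-frequency luminance detail left in the residual is passed to the chrominance path, which is precisely the cross-colour mechanism noted above.

```python
import numpy as np

def frequency_separate(composite, taps=9):
    """Separate a composite line purely by frequency band: a simple
    moving-average low-pass yields the luminance estimate, and the
    residual (containing the subcarrier region) is taken as the
    modulated chrominance."""
    kernel = np.ones(taps) / taps
    luma = np.convolve(composite, kernel, mode='same')  # low band -> Y
    chroma = composite - luma                           # high band -> C
    return luma, chroma
```

For a flat (DC) picture area the residual is zero, so no spurious chrominance is produced; it is only in the presence of fine luminance detail that the separation fails.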
The more sophisticated systems apply analog comb filter techniques to separate the luminance and chrominance components, the difficulty of separating signals occupying the same parts of the spectrum being overcome by exploiting the redundancy of information contained in a video signal when several neighbouring scan lines contain very similar information. It must be emphasised that this assumption is fundamental to the operation of line-based comb filters, as information theory shows that in the general case the extra chrominance information cannot be introduced as an independent signal without the occurrence of crosstalk.
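A minimal sketch of a line-based comb filter follows. It assumes an NTSC-style relationship in which the subcarrier phase inverts between successive lines, so that summing adjacent lines cancels the chrominance and differencing cancels the luminance; as emphasised above, this holds only when the two lines carry near-identical picture content, and real comb filters (and the PAL line relationship) are more elaborate.

```python
import numpy as np

def line_comb(prev_line, curr_line):
    """Two-line comb filter: with the subcarrier in antiphase on
    adjacent lines and the picture content assumed identical, the sum
    recovers luminance and the difference recovers modulated
    chrominance."""
    luma = 0.5 * (curr_line + prev_line)    # chroma cancels (antiphase)
    chroma = 0.5 * (curr_line - prev_line)  # luma cancels (assumed identical)
    return luma, chroma
```

Where the assumption of similar neighbouring lines fails, for example at a horizontal colour transition, the cancellation is incomplete and residual crosstalk reappears, which is why the comb filter improves upon but does not eliminate the degradations of the simple frequency-based decoder.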