Devices such as televisions, broadcast systems, mobile devices, and laptop and desktop computers may display video in response to receiving video or other media signals, such as interlaced video signals. Typically, before the signals are provided to such devices, they are filtered to improve various characteristics of the video, such as subjective quality and brightness. Because video signals vary greatly in both content and nature, numerous filtering techniques have been developed, directed to, for instance, the removal of distortion from a video signal. These techniques take many approaches and are typically applied at the native resolution or format of the video signal. For example, if the content of a video signal is interlaced and/or has a particular resolution, filtering is applied to the interlaced content and/or at that particular resolution. Often, however, these approaches introduce artifacts, overfilter, and/or create a trade-off between spatial performance and resolution in the video signal.
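As a minimal illustrative sketch of native-format filtering (not taken from the source; the function names and the radius-1 box blur are assumptions chosen for illustration), the following separates an interlaced frame into its two fields and applies a simple horizontal filter within each field, i.e. at the signal's native interlaced format:

```python
def split_fields(frame):
    """Separate an interlaced frame (list of rows) into its
    top field (even rows) and bottom field (odd rows)."""
    return frame[0::2], frame[1::2]

def box_blur_row(row):
    """Radius-1 horizontal box blur applied to one row of samples."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def filter_native(frame):
    """Filter each field at its native resolution, then re-interleave
    the filtered fields back into a single interlaced frame."""
    top, bottom = split_fields(frame)
    top_f = [box_blur_row(r) for r in top]
    bot_f = [box_blur_row(r) for r in bottom]
    out = []
    for t, b in zip(top_f, bot_f):
        out.append(t)
        out.append(b)
    return out
```

A frame here is a simple list of rows of luma samples; real implementations operate on planar buffers, but the field split and per-field filtering follow the same pattern.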
Moreover, many traditional filtering techniques filter the fields of a video signal independently, thereby compromising relationships between fields of the same frame. While some implementations have attempted to mitigate this, such solutions have proven complicated and computationally demanding.
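The limitation above can be sketched with a toy example (the function names and the radius-1 vertical blur are illustrative assumptions, not the source's method). A vertical filter run on each field independently never sees samples that are vertical neighbours in the frame but belong to the other field, so fine vertical detail spanning both fields passes through untouched:

```python
def vblur_column(col):
    """Radius-1 vertical box blur along one column of samples."""
    out = []
    for i in range(len(col)):
        window = col[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out

def filter_fields_independently(col):
    """Blur the even-row (top-field) and odd-row (bottom-field)
    samples separately, then re-interleave. Samples in different
    fields never influence one another, so relationships between
    fields of the same frame are lost."""
    even = vblur_column(col[0::2])
    odd = vblur_column(col[1::2])
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out

# One-line-pitch vertical detail: each field sees a constant signal,
# so per-field filtering leaves the column unchanged, whereas a
# frame-wide vertical blur would smooth it.
column = [0, 100, 0, 100]
```

The same separation is what makes joint (cross-field) approaches attractive, and why, as noted above, they tend to be complicated and computationally demanding.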