Large amounts of video information are sometimes processed by characterizing the video information in terms of pixels. A pixel is often used to represent the smallest element of a display surface that can be assigned independent characteristics. Because of the large amount of data required to characterize video information, efficient techniques are needed to compress and manage that data. For example, hierarchical motion searching techniques are often employed when processing and handling large amounts of video information.
Scaling is a common technique used in video processing. For example, a 2:1 scaler may be used to implement a hierarchical motion vector search. Pixel interpolation is another known technique for reducing the amount of data that is processed, stored, or transferred. In a pixel interpolator, attributes of a processed pixel are derived by averaging the values of a number of stored pixels. The values may further be weighted depending on the locations of the pixels relative to the processed pixel.
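The weighted-averaging interpolation described above can be sketched as bilinear interpolation, where each of the four stored neighbors contributes in proportion to its proximity to the derived pixel. This is a minimal illustrative sketch, not an implementation from the source; the function name and the list-of-lists image layout are assumptions.

```python
def interpolate_pixel(image, y, x):
    """Derive a pixel value at fractional coordinates (y, x) by averaging
    the four surrounding stored pixels, weighted by their distance from
    the requested location (bilinear interpolation)."""
    y0, x0 = int(y), int(x)
    # Clamp the lower-right neighbors at the image boundary.
    y1 = min(y0 + 1, len(image) - 1)
    x1 = min(x0 + 1, len(image[0]) - 1)
    fy, fx = y - y0, x - x0
    # Each neighbor's weight depends on how close it is to (y, x).
    return (image[y0][x0] * (1 - fy) * (1 - fx) +
            image[y0][x1] * (1 - fy) * fx +
            image[y1][x0] * fy * (1 - fx) +
            image[y1][x1] * fy * fx)

# A pixel derived at the center of four stored pixels weights them equally.
img = [[0, 10],
       [20, 30]]
center = interpolate_pixel(img, 0.5, 0.5)  # equal weights of 0.25 each
```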
One known scaling technique uses a video source, such as a camera, to provide video data to a multi-tap, multiple-phase programmable scaling filter. The video data is transmitted from the source one pixel at a time, with the sequence of pixels organized into rows that make up the image. Each row is provided left to right, and the rows are provided top to bottom. The programmable scaling filter performs a data filtering operation and scales the video data in the horizontal direction as the pixels from the video source pass through the filter, one pixel at a time. The filtered and horizontally scaled image is then stored in memory. The image is next scaled in the vertical direction by a digital signal processing (DSP) device, which reads the horizontally scaled image from the memory and executes a vertical scaling routine to provide a horizontally and vertically scaled image for subsequent processing.
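The two-pass pipeline described above can be sketched as follows, assuming for illustration a 2:1 downscale with a simple two-tap averaging filter in each direction. The function names, the filter taps, and the sample frame are hypothetical; a real programmable scaling filter would apply multi-tap, multi-phase coefficients.

```python
def scale_horizontal(row):
    """First pass: filter and scale one row 2:1 as its pixels arrive
    left to right (the programmable scaling filter's role)."""
    return [(row[i] + row[i + 1]) / 2 for i in range(0, len(row) - 1, 2)]

def scale_vertical(image):
    """Second pass: read the horizontally scaled image back from memory
    and scale it 2:1 in the vertical direction (the DSP's routine)."""
    return [[(a + b) / 2 for a, b in zip(image[i], image[i + 1])]
            for i in range(0, len(image) - 1, 2)]

# A 4x4 sample frame, delivered row by row from the source.
frame = [[0, 2, 4, 6],
         [8, 10, 12, 14],
         [16, 18, 20, 22],
         [24, 26, 28, 30]]

# Each horizontally scaled row is stored in memory, then the vertical
# pass produces the fully scaled 2x2 image.
memory = [scale_horizontal(row) for row in frame]
scaled = scale_vertical(memory)
```

Because the horizontal pass runs on the pixel stream while the vertical pass must wait for complete rows in memory, the second pass is where the DSP latency discussed below accumulates.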
For most video processing applications, the DSP may be designed to operate in several different modes and to perform a variety of different tasks. In general, these modes and tasks increase the temporal overhead of the system: the latency associated with the required processing lengthens the overall processing time and limits the speed of the video processing system.