Traditionally, a single set of horizontal and vertical scale factors has been used to scale an entire video image. In such traditional rectilinear scaling, a destination image is orthogonal to a source image. For such traditional scaling, a fixed filter kernel size and a small set of filter phases are used to decompose a filter into a cascade of one-dimensional filters, sometimes referred to as separable filters. However, such traditional scaling is generally not suitable for nonlinear image mapping.
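The rectilinear, separable scaling described above may be sketched as follows. This is a minimal illustration, not an implementation from the source: function and parameter names are hypothetical, and a simple two-tap (linear) kernel stands in for whatever fixed-size kernel a real scaler would use.

```python
# Sketch of traditional rectilinear (separable) scaling: the 2D resample is
# decomposed into a cascade of two one-dimensional passes, each using a
# fixed-size filter kernel evaluated at a small set of phases.
# All names here are illustrative, not from the source.

def resample_1d(line, out_len):
    """Resample one row or column with a fixed 2-tap (linear) kernel."""
    in_len = len(line)
    scale = in_len / out_len                       # single fixed scale factor
    out = []
    for i in range(out_len):
        pos = (i + 0.5) * scale - 0.5              # source-space coordinate
        left = int(pos // 1)
        phase = pos - left                         # filter phase in [0, 1)
        a = line[max(0, min(in_len - 1, left))]
        b = line[max(0, min(in_len - 1, left + 1))]
        out.append((1.0 - phase) * a + phase * b)  # 2-tap kernel weights
    return out

def scale_image(img, out_w, out_h):
    """Separable scaling: a horizontal 1D pass, then a vertical 1D pass."""
    rows = [resample_1d(row, out_w) for row in img]               # horizontal
    cols = [resample_1d([r[x] for r in rows], out_h)              # vertical
            for x in range(out_w)]
    return [[cols[x][y] for x in range(out_w)] for y in range(out_h)]
```

Because the same pair of scale factors applies everywhere in the image, only a small, precomputable set of filter phases ever occurs, which is what makes this decomposition efficient for rectilinear scaling and unsuitable for nonlinear mappings where the local scale varies across the image.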
For nonlinear image mapping or remapping, such as for example for off-axis dome projection, a video signal may be pre-distorted to look correct from a viewing position, such as when projected onto a dome for example. Heretofore, there were generally three types of video signal predistortion, namely: off-line, frame-by-frame processing using high-quality filtering techniques in software, such as Adobe After Effects for example; render-to-texture by graphics processing units (“GPUs”); and projection of an image onto a curved mirror with geometry corresponding to that of a dome. However, each of these conventional types of video signal predistortion has a limitation. For example, off-line, frame-by-frame processing conventionally takes a significant amount of time and thus may not be practical in some real-time live video applications. GPU render-to-texture may provide real-time performance, but it generally does so with diminished image quality, which may be due in part to limitations associated with the bilinear blending filtering used by GPUs. Projection onto a curved mirror is a mechanical solution that lacks flexibility.
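The bilinear blending filtering referred to above may be sketched as follows. This is an illustrative model of how a GPU texture unit samples during render-to-texture remapping, not code from the source; the function name is hypothetical.

```python
# Sketch of bilinear blending, as applied by GPU texture units during
# render-to-texture remapping: each output sample blends only a 2x2
# neighborhood of texels, which limits quality when the remapping
# shrinks the source locally (more than 2x2 texels map to one output pixel).
# Names are illustrative, not from the source.

def bilinear_sample(tex, u, v):
    """Fetch tex at continuous coordinate (u, v) with bilinear blending."""
    h, w = len(tex), len(tex[0])
    x0, y0 = int(u), int(v)
    fx, fy = u - x0, v - y0                        # blend fractions
    x1 = min(x0 + 1, w - 1)                        # clamp at the border
    y1 = min(y0 + 1, h - 1)
    top = (1 - fx) * tex[y0][x0] + fx * tex[y0][x1]
    bot = (1 - fx) * tex[y1][x0] + fx * tex[y1][x1]
    return (1 - fy) * top + fy * bot
```

The fixed 2x2 footprint is the key limitation: unlike the larger, phase-adjustable kernels of off-line filtering, bilinear blending ignores source texels outside that footprint, so heavily minified regions of a nonlinear remapping alias or blur.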
Accordingly, it would be both desirable and useful to provide video signal predistortion suitable for real-time applications with enhanced quality and flexibility as compared with one or more of the above-mentioned conventional types of video signal predistortion.