High-resolution images and videos are now ubiquitous, yet the image processing systems used to edit such data are typically not fast enough to allow real-time processing. For example, many modern image and video editing systems require spatially variant (two-dimensional or higher-dimensional) processing, which is typically complex, resource-intensive and time-consuming. One of the most sought-after requirements is edge-sensitivity, whereby the image processing system is able to change its behavior depending on the local image contrast; this may also be referred to as contrast-sensitive image and video editing or processing. Such dependence on the data content limits the speed of current image processing systems.
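The text above does not disclose a particular edge-sensitive algorithm; one common instance of contrast-sensitive processing is a bilateral-style filter, in which smoothing weights depend on local intensity differences so that strong edges are preserved. The sketch below is purely illustrative (the function name and parameters are assumptions, not from the source) and shows why such processing is data-dependent and therefore costly:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Illustrative edge-sensitive smoothing (bilateral-style filter).

    Each pixel becomes a weighted average of its neighbours, where the
    weight falls off both with spatial distance and with intensity
    difference, so regions of high local contrast (edges) are preserved.
    """
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    # Spatial Gaussian weights for the (2*radius+1)^2 window,
    # computed once since they do not depend on the image content.
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2.0 * sigma_s**2))
    padded = np.pad(img, radius, mode='edge')
    for y in range(h):
        for x in range(w):
            window = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # Range weight: depends on the data at every pixel, which is
            # what makes contrast-sensitive processing expensive.
            rng = np.exp(-((window - img[y, x])**2) / (2.0 * sigma_r**2))
            weights = spatial * rng
            out[y, x] = np.sum(weights * window) / np.sum(weights)
    return out
```

Note that the range weights must be recomputed at every pixel because they depend on the local image content, in contrast to a fixed convolution kernel; this data dependence is the source of the speed limitation discussed above.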
At present, image editing systems typically use a variety of approaches to achieve different image editing tasks. These tasks typically need to be carried out separately on an image or video, without the ability to reuse results achieved during one task as part of the method for another task. There is therefore a need to unify previously diverse image editing techniques in such a manner that at least some processing may be shared between tasks, so that computational resource requirements may be reduced.
There is also a need to provide new image and video editing capabilities, as well as to improve, enhance and speed up existing image and video editing systems.
The embodiments described herein are not limited to implementations which solve any or all of the disadvantages of known image or video editing and processing systems.