Image super-resolution (SR) refers generally to techniques that enhance the resolution of images. In “Fast and robust multiframe super-resolution” [Farsiu2004], a reconstruction technique is disclosed that requires several images of the same scene with sub-pixel displacements, from which a set of linear constraints on the new high-resolution (HR) pixel intensities is built. If enough images are provided, the system of equations is fully determined and can be solved to obtain the HR image. This approach, however, depends on the accuracy of the required registration process and is limited to small magnification factors.
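The idea of stacking one linear constraint per low-resolution observation can be illustrated with a minimal sketch. The example below assumes an idealized point-sampling camera model (no blur, integer sub-pixel shifts on the HR grid) so that, with enough distinct shifts, the system becomes fully determined; the method of Farsiu et al. additionally models blur and uses robust estimation, which this sketch omits.

```python
import numpy as np

def build_system(lr_images, shifts, scale):
    """One linear constraint per LR pixel: each LR intensity equals the HR
    pixel at the shifted sampling location (idealized point-sampling model)."""
    h, w = lr_images[0].shape
    H, W = h * scale, w * scale
    A, b = [], []
    for img, (dy, dx) in zip(lr_images, shifts):
        for i in range(h):
            for j in range(w):
                row = np.zeros(H * W)
                y = (i * scale + dy) % H  # circular boundary handling
                x = (j * scale + dx) % W
                row[y * W + x] = 1.0
                A.append(row)
                b.append(img[i, j])
    return np.array(A), np.array(b)

def solve_sr(lr_images, shifts, scale):
    """With enough distinct sub-pixel shifts the stacked system is fully
    determined, and a least-squares solve recovers the HR image."""
    A, b = build_system(lr_images, shifts, scale)
    hr, *_ = np.linalg.lstsq(A, b, rcond=None)
    h, w = lr_images[0].shape
    return hr.reshape(h * scale, w * scale)
```

Under this toy degradation model, four LR images shifted by all four sub-pixel offsets at 2x magnification observe every HR pixel exactly once, so the solve is exact; with fewer images the system is underdetermined, which is why the accuracy of registration and the magnification factor matter so much in practice.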
In “Learning low-level vision” [Freeman2000], the prediction from low-resolution (LR) to HR patches is learned with a Markov Random Field, solved by belief propagation. However, this approach requires large training datasets, on the order of millions of patch pairs, and is therefore computationally costly.
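The example-based principle behind this family of methods can be sketched without the MRF: a dictionary of paired LR/HR patches is built from training images, and each input LR patch is replaced by the HR patch whose LR counterpart matches it best. The degradation model (plain decimation) and the nearest-neighbor lookup below are simplifications; Freeman et al. additionally enforce consistency between neighboring HR patches through the MRF compatibility terms solved by belief propagation.

```python
import numpy as np

def extract_patches(img, size, step):
    """Collect all size x size patches at the given stride, row-major."""
    return np.array([img[i:i + size, j:j + size]
                     for i in range(0, img.shape[0] - size + 1, step)
                     for j in range(0, img.shape[1] - size + 1, step)])

def train_dictionary(hr_images, scale, size):
    """Build co-located LR/HR patch pairs from training images
    (toy degradation: point decimation of the HR image)."""
    lr_pats, hr_pats = [], []
    for hr in hr_images:
        lr = hr[::scale, ::scale]
        lr_pats.append(extract_patches(lr, size, 1))
        hr_pats.append(extract_patches(hr, size * scale, scale))
    return np.concatenate(lr_pats), np.concatenate(hr_pats)

def super_resolve(lr, lr_dict, hr_dict, scale, size):
    """Predict each HR patch via nearest-neighbor lookup in LR patch space,
    then average overlapping predictions (no MRF consistency term)."""
    H, W = lr.shape[0] * scale, lr.shape[1] * scale
    out, weight = np.zeros((H, W)), np.zeros((H, W))
    flat = lr_dict.reshape(len(lr_dict), -1)
    for i in range(lr.shape[0] - size + 1):
        for j in range(lr.shape[1] - size + 1):
            patch = lr[i:i + size, j:j + size].ravel()
            k = np.argmin(((flat - patch) ** 2).sum(axis=1))
            y, x, s = i * scale, j * scale, size * scale
            out[y:y + s, x:x + s] += hr_dict[k]
            weight[y:y + s, x:x + s] += 1
    return out / np.maximum(weight, 1)
```

Even this stripped-down version makes the cost structure visible: every query patch is compared against the whole dictionary, so inference time and memory grow linearly with the number of stored patch pairs.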
A super-resolution algorithm disclosed in “Super-resolution from a single image” [Glasner2009] exploits cross-scale self-similarity across several levels of an image pyramid in order to recover high-resolution details that are missing in the low-resolution image. This algorithm cannot handle high magnification factors in a single step and must instead cascade several smaller magnifications, which makes it noticeably slow. One of SR's main challenges is discovering mappings between the LR and HR manifolds of image patches.
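The cross-scale self-similarity idea can be illustrated with a toy sketch: each patch of the input image is matched against a coarser copy of the same image, and the co-located region in the original image then serves as that patch's high-resolution example. The decimation used for downscaling, the exhaustive search, and the fixed patch size below are all simplifications of the published algorithm, which searches across several pyramid levels and fuses the examples into a reconstruction.

```python
import numpy as np

def downscale(img, s):
    """Crude point decimation; stands in for proper filtered downscaling."""
    return img[::s, ::s]

def self_examples(img, size, scale):
    """For every patch of `img`, find its best match in a downscaled copy of
    `img`; the matched location, mapped back to the original image, yields a
    (LR patch, HR example) pair without any external training data."""
    small = downscale(img, scale)
    coarse = [(i, j, small[i:i + size, j:j + size])
              for i in range(small.shape[0] - size + 1)
              for j in range(small.shape[1] - size + 1)]
    pairs = []
    for i in range(img.shape[0] - size + 1):
        for j in range(img.shape[1] - size + 1):
            p = img[i:i + size, j:j + size]
            ci, cj, _ = min(coarse, key=lambda c: ((c[2] - p) ** 2).sum())
            hr = img[ci * scale:ci * scale + size * scale,
                     cj * scale:cj * scale + size * scale]
            if hr.shape == (size * scale, size * scale):
                pairs.append((p, hr))
    return pairs
```

The sketch also hints at why cascading is needed: the HR examples come from the gap between the input and one coarser scale, so a single pass only supports a modest magnification, and larger factors are reached by repeating the procedure on intermediate results.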