Various methods exist to scale a low resolution image to a high resolution image, including a method called sparse representation based image super resolution. Sparse representation super resolution typically involves two stages: an offline training stage followed by a reconstruction stage. The training process generally derives a dictionary of low resolution features, referred to as the low resolution dictionary, and a dictionary of high resolution features, referred to as the high resolution dictionary. The features generally consist of the high frequency components of the low resolution and high resolution images used for training. After training, the dictionaries can be used to create high resolution versions of new low resolution images. The features are optimized to minimize the number of dictionary entries (features) needed to match the patches in the training library.
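The reconstruction stage described above can be sketched in a few lines. The example below is a toy illustration, not any particular published method: it codes a low resolution feature patch sparsely against a low resolution dictionary (here via a minimal orthogonal matching pursuit) and reuses the resulting coefficients with the paired high resolution dictionary. The dictionaries, sizes, and names are assumptions for illustration; real dictionaries are overcomplete and learned from training data.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: express y as a sparse
    combination of dictionary atoms (columns of D)."""
    residual = y.copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        # least-squares refit on the selected atoms
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = y - D @ coef
    return coef

rng = np.random.default_rng(0)
# toy orthonormal low resolution dictionary (real dictionaries are
# overcomplete and learned from training patches)
D_low, _ = np.linalg.qr(rng.normal(size=(16, 16)))
D_high = rng.normal(size=(64, 16))  # paired high resolution atoms

# a low resolution feature patch built from two atoms
y_low = 2.0 * D_low[:, 3] + 0.5 * D_low[:, 10]
alpha = omp(D_low, y_low, n_nonzero=2)
# the same sparse coefficients select the paired high resolution atoms
patch_high = D_high @ alpha
```

Because the toy dictionary is orthonormal, the pursuit recovers the two generating atoms exactly; with a learned overcomplete dictionary the coding is approximate, which is precisely why the training stage matters.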
Many current methods of sparse representation super resolution produce only one low resolution dictionary and one high resolution dictionary. Because a great variety of structures, such as edges, corners, chessboard patterns, and random or regular textures, exists in natural images, using a single low resolution and a single high resolution dictionary for every sample either reduces the amount of information available to accurately reconstruct the image or forces the dictionaries to grow very large. It is therefore advantageous to create multiple dictionaries, each optimized for a particular type of structure or texture.
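One way to obtain structure-specific dictionaries is to cluster the training patches first and then train a separate dictionary on each cluster. The sketch below only illustrates the clustering step, using a minimal k-means on toy patch features; a real system would run a dictionary learning algorithm such as K-SVD within each cluster. The data, cluster count, and names are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: group patch feature vectors into k structure classes."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each patch to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # move each center to the mean of its assigned patches
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

rng = np.random.default_rng(1)
# toy patch features imitating two structure types: gradient-like "edge"
# patches and zero-mean "texture" patches
edges = rng.normal(scale=0.1, size=(50, 8)) + np.linspace(-1, 1, 8)
textures = rng.normal(scale=0.1, size=(50, 8))
patches = np.vstack([edges, textures])

centers, labels = kmeans(patches, k=2)
# one dictionary per cluster (here simply the cluster's patches as atoms;
# a real system would learn a compact dictionary per cluster instead)
dictionaries = [patches[labels == j].T for j in range(2)]
```

At reconstruction time, each input patch would be assigned to its nearest cluster center and coded against that cluster's dictionary only.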
In addition, several different kinds of training strategies exist for developing the low and high resolution dictionaries. For example, one method trains a low resolution dictionary, determines the sparse coefficients for all the feature vectors of the low resolution patches, and then uses those coefficients to develop a high resolution feature dictionary that best fits the high resolution training data. During image reconstruction, the sparse coefficients computed from the low resolution features are applied to the high resolution feature dictionary to create the high resolution version of the image. These learned dictionaries are referred to as sequential dictionaries. Another method creates the high resolution and low resolution features for the high and low resolution dictionaries simultaneously. In this method, the sparse coefficients and the features are developed together to optimize the performance with the training images. These dictionaries are referred to as joint dictionaries.
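The second step of the sequential strategy, fitting the high resolution dictionary to coefficients already computed from the low resolution features, reduces to a linear least-squares problem. The sketch below assumes the low resolution dictionary and the sparse coefficient matrix are already available (random stand-ins here, in place of a trained dictionary and a real sparse coder) and solves for the high resolution dictionary minimizing the Frobenius-norm reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(2)
n_atoms, n_samples = 32, 200

# stand-in for an already-learned low resolution dictionary
D_low = rng.normal(size=(16, n_atoms))
D_low /= np.linalg.norm(D_low, axis=0)

# stand-in sparse coefficients for the low resolution training features
# (a real system would compute these with a sparse coder such as OMP;
# here each column activates 3 random atoms)
A = np.zeros((n_atoms, n_samples))
for j in range(n_samples):
    idx = rng.choice(n_atoms, size=3, replace=False)
    A[idx, j] = rng.normal(size=3)

# paired high resolution training features (random for illustration)
Y_high = rng.normal(size=(64, n_samples))

# sequential step: choose D_high to minimize ||Y_high - D_high @ A||_F,
# whose closed form is Y_high @ pinv(A) when A has full row rank
D_high = Y_high @ np.linalg.pinv(A)
residual = np.linalg.norm(Y_high - D_high @ A)
```

Note that the fit is optimal only given the fixed coefficients A; since A was derived from the low resolution features alone, the resulting high resolution dictionary is not jointly optimal, which is the trade-off the next paragraph discusses.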
The sequential dictionaries method produces the sparse coefficients using only the low resolution features, and the high resolution feature dictionary is derived from these coefficients, so that dictionary is not optimal for the high resolution samples. This method does not produce the most detailed results, but the results are more stable and have fewer artifacts. In the joint dictionary approach, the optimum sparse coefficients used during training, which are generated from both the low and high resolution samples, differ from those obtainable during the reconstruction stage, because only the low resolution samples are known at reconstruction. This mismatch may introduce artifacts into the reconstructed image. However, the joint approach generally recovers more detail than the sequential dictionaries approach.