The importance of video compression has increased manifold due to the exponential growth of online streaming and the growing volume of video stored on the cloud. In conventional video coding or compression algorithms, block-based compression is a common practice. The video frames may be fragmented into blocks of fixed size for further processing. However, the fragmentation may result in the creation of redundant blocks, which may increase the computation requirement. Further, the use of hybrid video coding methods to decide the prediction modes may complicate the process.
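The fixed-size fragmentation described above, and the redundancy it can produce, may be sketched as follows. This is an illustrative example only, not a description of any particular codec; the block size and the exact-match redundancy test are assumptions chosen for clarity (practical coders compare blocks approximately, e.g. by sum of absolute differences).

```python
import numpy as np

def fragment_into_blocks(frame, block_size=8):
    """Split a frame into non-overlapping fixed-size blocks
    (the conventional fragmentation step)."""
    h, w = frame.shape
    blocks = []
    for y in range(0, h - h % block_size, block_size):
        for x in range(0, w - w % block_size, block_size):
            blocks.append(frame[y:y + block_size, x:x + block_size])
    return blocks

def count_redundant_blocks(blocks):
    """Count blocks whose content duplicates an earlier block.
    Exact byte equality is used here purely for illustration."""
    seen = set()
    redundant = 0
    for b in blocks:
        key = b.tobytes()
        if key in seen:
            redundant += 1
        else:
            seen.add(key)
    return redundant

# A flat 64x64 frame: after the first 8x8 block, every block is redundant,
# yet each one would still be processed by a fixed-block pipeline.
frame = np.zeros((64, 64), dtype=np.uint8)
blocks = fragment_into_blocks(frame)
print(len(blocks), count_redundant_blocks(blocks))  # 64 63
```

The example illustrates the computational concern: all 64 blocks enter the processing pipeline even though 63 of them carry no new information.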
Some of the conventional methods discuss video compression using learned dictionaries, either with fixed or self-adaptive atoms, along with a fixed transform basis. In such methods, blocks may be represented by weighted dictionary atoms and transform basis coefficients. These conventional methods may implement deep learning for video compression; however, they may not use variable block sizes and may instead rely on fixed-size blocks for processing. This may further result in redundancy in processing, as many of the blocks may have the same features.
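The dictionary-based representation mentioned above may be sketched as follows. This is a simplified illustration under assumptions not stated in the original: the dictionary here is random rather than learned, the block size is 4x4, and the weights are found by least squares (practical methods typically use sparse coding over an overcomplete learned dictionary).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed dictionary: 16 atoms for flattened 4x4 blocks.
# In the methods discussed, atoms would be learned or self-adaptive.
n_pix, n_atoms = 16, 16
D = rng.standard_normal((n_pix, n_atoms))

def represent_block(block, D):
    """Represent a flattened block as a weighted combination of
    dictionary atoms; least squares stands in for sparse coding here."""
    w, *_ = np.linalg.lstsq(D, block.ravel(), rcond=None)
    return w

def reconstruct_block(w, D, shape=(4, 4)):
    """Rebuild the block from its dictionary weights."""
    return (D @ w).reshape(shape)

block = rng.standard_normal((4, 4))
w = represent_block(block, D)
recon = reconstruct_block(w, D)
print(np.allclose(recon, block))  # True: a full-rank square dictionary
                                  # reconstructs the block exactly
```

Because every block is coded independently against the same dictionary, two blocks with identical features yield identical weight computations, which is the redundancy the passage points to.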