The most popular compression algorithms encode data using one of two approaches: Huffman encoding or arithmetic/range encoding.
Huffman encoding is much faster but may result in lower compression ratios; arithmetic encoding, on the other hand, achieves higher compression ratios at the expense of additional computational cost (i.e. slower encoding and decoding). Asymmetric Numeral Systems (ANS) are a relatively new approach to lossless entropy encoding, with the potential to combine the speed of Huffman encoding with the compression ratios of arithmetic encoding (J. Duda, "Asymmetric numeral systems: entropy coding combining speed of Huffman coding with compression rate of arithmetic coding", arXiv:1311.2540 [cs.IT], 2013; F. Giesen, "Interleaved entropy coders", arXiv:1402.3392 [cs.IT], 2014).
However, ANS has two general drawbacks:
1. it relies on a static probability distribution; and
2. it decodes symbols in the reverse order to encoding (i.e. it is LIFO, Last In First Out), making it unsuitable for very large blocks of data.
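The LIFO behaviour in drawback 2 can be seen in a toy rANS coder. The two-symbol alphabet, frequency table, and scale M = 8 below are arbitrary illustrative choices, and renormalisation is omitted for brevity; Python's arbitrary-precision integers hold the growing state:

```python
# Minimal rANS sketch: no renormalisation, state kept in one big integer.
M = 8                       # total frequency (the integer probability scale)
freq = {'a': 6, 'b': 2}     # static frequency table: P(a)=6/8, P(b)=2/8
cum  = {'a': 0, 'b': 6}     # cumulative frequencies

def encode(symbols, x=M):
    """Fold each symbol into the single integer state x."""
    for s in symbols:
        x = (x // freq[s]) * M + cum[s] + (x % freq[s])
    return x

def decode(x, n):
    """Pop n symbols back out of the state x."""
    out = []
    for _ in range(n):
        slot = x % M
        s = 'a' if slot < cum['b'] else 'b'   # locate slot in cumulative table
        x = freq[s] * (x // M) + slot - cum[s]
        out.append(s)
    return out, x

state = encode("aababa")
decoded, final = decode(state, 6)
assert decoded == list("ababaa")   # symbols come back in reverse order (LIFO)
assert final == M                  # state returns to its initial value
```

Because decoding unwinds the state in reverse, an encoder must either buffer a whole block and emit it backwards or accept reversed output, which is what makes very large blocks awkward.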
ANS's reliance on a static probability distribution has several implications:
1. The static probability distribution must be built on either the full dataset or a subset of it, and this must be done before compression can proceed.
2. The probability distributions themselves can be resource-intensive to store, especially when compressing small blocks of data, while for large blocks of data they are expensive to calculate. Since a fixed probability distribution is required before compression can begin, building a distribution table over a large dataset is not a practical option for streaming applications.
3. If a symbol is encountered that was assigned an expected probability of 0, encoding fails and subsequent data is corrupted.
4. Adjusting the probability of rare symbols to a minimum positive value can greatly affect the compression achieved. For example, if only 12 symbols account for 99.9999% of the data but 256 symbols are supported in total, then allocating a non-zero probability to the other 244 symbols can considerably reduce the compression efficiency for the 12 main symbols, especially when probability distributions are represented on an integer scale of 4096.
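To make implication 4 concrete, the sketch below quantifies the per-symbol overhead of giving every unused symbol the minimum frequency of 1. The choice of 12 equally likely symbols is an illustrative assumption; the alphabet size of 256 and the integer scale of 4096 are taken from the example above:

```python
import math

SCALE = 4096          # integer probability scale used by the table
NUM_SYMBOLS = 256     # full supported alphabet
USED = 12             # symbols that actually occur (assumed uniform here)

# True distribution: 12 equally likely symbols, the other 244 never occur.
true_p = [1 / USED] * USED

# Quantised table: every unseen symbol still gets the minimum frequency 1,
# leaving SCALE - (NUM_SYMBOLS - USED) units for the 12 real symbols.
rare = NUM_SYMBOLS - USED                  # 244 symbols at frequency 1
main_freq = (SCALE - rare) // USED         # 321 units each (remainder ignored)

ideal  = -sum(p * math.log2(p) for p in true_p)                  # entropy bound
actual = -sum(p * math.log2(main_freq / SCALE) for p in true_p)  # with the floor
print(f"ideal {ideal:.4f} bits/sym, with min-freq floor {actual:.4f} bits/sym")
```

For this uniform case the floor costs roughly 0.09 bits per symbol (about 2.5%); since the absolute overhead is fixed by the units diverted to unused symbols, the relative loss is larger for lower-entropy, more skewed sources.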
The rANS variant has a further issue: despite its highly efficient coding scheme, it assumes that each symbol is independent of history (i.e. rANS does not exploit inter-symbol redundancy). This means that, unless carefully managed, rANS by itself can easily perform worse than approaches such as zlib, even though zlib's entropy stage uses only Huffman coding, because zlib's LZ77 stage does exploit inter-symbol redundancy.
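This gap can be demonstrated with Python's standard zlib module. On data whose byte-level distribution is nearly uniform but whose byte sequence repeats, the order-0 entropy bound, the best any memoryless coder such as bare rANS can achieve, loses badly to DEFLATE, whose LZ77 stage captures the repetition. The test data below is an arbitrary illustrative choice:

```python
import math
import zlib
from collections import Counter

# Highly repetitive data: 64 distinct byte values in a repeating pattern.
data = bytes(range(64)) * 256          # 16 KiB

counts = Counter(data)
n = len(data)
# Shannon bound for any order-0 (memoryless) entropy coder, rANS included:
# 6 bits per byte here, since the 64 values are equally frequent.
order0_bits = -sum(c * math.log2(c / n) for c in counts.values())
order0_bytes = math.ceil(order0_bits / 8)

zlib_bytes = len(zlib.compress(data, 9))   # DEFLATE: LZ77 + Huffman

print(order0_bytes, zlib_bytes)
assert zlib_bytes < order0_bytes           # zlib wins despite weaker entropy coding
```

Here the order-0 bound is 12,288 bytes, while zlib reduces the block to a small fraction of that, entirely through inter-symbol redundancy that a memoryless rANS coder cannot see.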
Therefore, there exists a need for an efficient data compression method and system.