1. Field of the Invention
The invention relates to a method and an arrangement for arithmetically encoding and decoding binary states and to a corresponding computer program and a corresponding computer-readable storage medium which may in particular be used in digital data compression.
2. Description of the Related Art
The present invention describes a new efficient method for binary arithmetic coding. There is a demand for binary arithmetic coding in many different application areas of digital data compression; here, applications in the field of digital image compression are of special interest. In numerous standards for image coding, such as JPEG, JPEG-2000, JPEG-LS and JBIG, methods for binary arithmetic coding have been defined. Recent standardization activities also indicate the future use of such coding technologies in the field of video coding (CABAC in H.264/AVC) [1].
The advantages of arithmetic coding (AC) over the Huffman coding [2] that has so far been used in practice may basically be characterized by three features:
1. By using simple adaptation mechanisms, arithmetic coding permits a dynamic adaptation to the current source statistics (adaptivity).
2. Arithmetic coding allows the allocation of a non-integer number of bits per symbol to be coded and is therefore suitable for achieving coding results that approach the entropy as the theoretically given lower bound (entropy approximation) [3].
3. Using suitable context models, statistical dependencies between symbols may be exploited for a further data reduction with arithmetic coding (inter-symbol redundancy) [4].
The increased computational effort compared to Huffman coding is generally regarded as a disadvantage of applying arithmetic coding.
The concept of arithmetic coding goes back to the fundamental work on information theory by Shannon [5]. First conceptual construction methods were published by Elias [6]. A first LIFO (last-in-first-out) variant of arithmetic coding was designed by Rissanen [7] and later modified by different authors [8] [9] [10] into FIFO (first-in-first-out) implementations.
All of these documents share the basic principle of recursive subinterval decomposition. Corresponding to the given probabilities P(“0”) and P(“1”) of the two symbols {“0”, “1”} of a binary alphabet, an initially given interval, e.g. the interval [0, 1), is recursively decomposed into subintervals depending on the occurrence of the individual events. The size of the resulting subinterval, being the product of the probabilities of the occurring events, is thus proportional to the probability of the sequence of individual events. Since every event S_i with probability P(S_i) contributes its theoretical information content H(S_i) = −log_2 P(S_i) to the overall rate, a relation between the number N_bit of bits needed to represent the subinterval and the entropy of the sequence of individual events results, which is given by the right-hand side of the following equation:
N_bit = −log_2 Π_i P(S_i) = −Σ_i log_2 P(S_i)
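The recursive subdivision described above may be sketched with exact rational arithmetic (a toy illustration with a fixed, hypothetical probability model for P(“0”); real coders adapt this estimate and work with finite precision):

```python
from fractions import Fraction

def subdivide(bits, p0):
    """Recursively subdivide [0, 1) according to a binary event sequence.

    bits: sequence of 0/1 events; p0: probability P("0"), fixed here
    purely for illustration. Returns (base, width) of the resulting
    subinterval; the width equals the product of the event probabilities.
    """
    base, width = Fraction(0), Fraction(1)
    for b in bits:
        w0 = width * p0          # width of the "0" subinterval
        if b == 0:
            width = w0           # keep the lower part
        else:
            base += w0           # skip past the "0" part
            width -= w0          # keep the upper part
    return base, width

base, width = subdivide([0, 1, 0], Fraction(3, 4))
# width == (3/4) * (1/4) * (3/4) == 9/64, the product of the P(S_i)
```

Since the final width equals Π_i P(S_i), about −log_2(width) bits suffice to single out a number inside the subinterval, which is exactly the entropy relation stated above.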
This basic principle, however, first of all requires a (theoretically) unlimited accuracy in the representation of the resulting subinterval and, apart from that, has the disadvantage that the bits representing the resulting subinterval may only be output after the last event has been coded. For practical applications it was therefore decisive to develop mechanisms for an incremental output of bits while representing the interval with numbers of a predetermined fixed accuracy. These were first introduced in the documents [3] [7] [11].
In FIG. 1, the basic operations for a binary arithmetic coding are indicated. In the illustrated implementation the current subinterval is represented by the two values L and R, wherein L indicates the lower boundary (base) and R the size (width) of the subinterval, both quantities being represented as b-bit integers. The coding of a bit ∈ {0, 1} is basically performed in five substeps: In the first step, the value of the less probable symbol is determined using the probability estimation. For this symbol, also referred to as the LPS (least probable symbol) in contrast to the MPS (most probable symbol), the probability estimate P_LPS is used in the second step to calculate the width R_LPS of the corresponding subinterval. Depending on the value of the bit to be coded, L and R are updated in the third step. In the fourth step, the probability estimation is updated depending on the value of the just coded bit, and finally, in the last step, the code interval width R is subjected to a so-called renormalization, i.e. R is rescaled so that, for example, the condition R ∈ [2^(b−2), 2^(b−1)] is fulfilled. One bit is output with every scaling operation. For further details please refer to [10].
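The five substeps may be sketched as follows. This is a simplified sketch and not the implementation of FIG. 1: the probability update of step 4 is reduced to a fixed estimate, and the carry problem during renormalization is resolved with the classic pending-bit technique known from incremental arithmetic coders:

```python
class BinaryArithmeticEncoder:
    """Sketch of a b-bit binary arithmetic encoder (illustrative only)."""

    B = 16                        # register width b
    HALF = 1 << (B - 1)
    QUARTER = 1 << (B - 2)

    def __init__(self, p_lps=0.2, mps=0):
        self.L, self.R = 0, (1 << self.B) - 1   # current interval [L, L + R)
        self.p_lps, self.mps = p_lps, mps       # fixed estimate (step 1)
        self.out, self.pending = [], 0

    def _emit(self, bit):
        self.out.append(bit)
        self.out.extend([1 - bit] * self.pending)   # flush deferred bits
        self.pending = 0

    def encode(self, bit):
        # Step 2: width of the LPS subinterval (the multiplication at issue).
        r_lps = max(1, int(self.R * self.p_lps))
        # Step 3: update L and R depending on the coded bit.
        if bit == self.mps:
            self.R -= r_lps           # MPS takes the lower part
        else:
            self.L += self.R - r_lps  # LPS takes the upper part
            self.R = r_lps
        # (Step 4, the probability update, is omitted in this sketch.)
        # Step 5: renormalize until R >= 2^(b-2), outputting bits.
        while self.R < self.QUARTER:
            if self.L + self.R <= self.HALF:    # lower half: emit 0
                self._emit(0)
            elif self.L >= self.HALF:           # upper half: emit 1
                self._emit(1)
                self.L -= self.HALF
            else:                               # interval straddles midpoint:
                self.pending += 1               # defer the bit decision
                self.L -= self.QUARTER
            self.L <<= 1
            self.R <<= 1
```

The straddle branch is what makes the output incremental despite the finite b-bit registers: a bit whose value is not yet determined is merely counted and resolved later, inverted, after the next unambiguous bit.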
The main disadvantage of an implementation as outlined above lies in the fact that the calculation of the interval width R_LPS requires a multiplication for every symbol to be coded. In general, multiplication operations are cost- and time-intensive, in particular when realized in hardware. Several research documents have examined methods to replace this multiplication by a suitable approximation [11] [12] [13] [14]. The methods published on this topic may generally be divided into three categories.
The first group of proposals for a multiplication-free binary arithmetic coding is based on the approach of approximating the estimated probabilities P_LPS such that the multiplication in the second step of FIG. 1 may be replaced by one (or several) shift and addition operation(s) [11] [14]. In the simplest case, the probabilities P_LPS are approximated by values of the form 2^(−q) with an integer q > 0.
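In this simplest variant the product R · P_LPS reduces to a single right shift (a hypothetical helper, assuming P_LPS ≈ 2^(−q)):

```python
def r_lps_shift(R, q):
    """Approximate R * p_lps for p_lps ~= 2**-q by a right shift,
    avoiding the per-symbol multiplication; the lower bound of 1
    keeps the LPS subinterval nonempty."""
    return max(1, R >> q)

# Example: R = 52428 on a 16-bit register, p_lps ~= 1/8 (q = 3).
```

The price of this approach is the coarse probability grid: between 2^(−q) and 2^(−q−1) no intermediate estimate can be represented, which is where the shift-and-add refinements of [11] [14] come in.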
In the second category of approximative methods it is proposed to approximate the value range of R by discrete values of the form (½ − r), wherein r ∈ {0} ∪ {2^(−k) | k > 0, k integer} is selected [15] [16].
The third category of methods is characterized by the fact that arithmetic operations are replaced by table accesses. To this group of methods belong, on the one hand, the Q-coder used in the JPEG standard and related methods such as the QM- and MQ-coders [12], and, on the other hand, the quasi-arithmetic coder [13]. While the latter method drastically limits the number b of bits used for the representation of R in order to obtain tables of acceptable size, in the Q-coder the renormalization of R is implemented such that R remains approximately equal to 1, so that the product R·P_LPS may be approximated by P_LPS itself. In this way the multiplication for determining R_LPS is avoided. Additionally, the probability estimation is operated using a table in the form of a finite state machine. For further details please see [12].
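The table-driven probability estimation mentioned last may be illustrated with a toy state machine (the table below is invented for illustration and is not the actual Q-coder transition table): each state carries an LPS probability estimate, and coding a bit merely selects the next state, so no arithmetic update of the estimate is needed.

```python
# Each entry: (p_lps, next state after MPS, next state after LPS, swap MPS?)
STATES = [
    (0.40, 1, 0, True),    # near-uniform; an LPS here flips the MPS meaning
    (0.25, 2, 0, False),
    (0.12, 3, 1, False),
    (0.05, 3, 2, False),   # most skewed state; stays put on further MPS
]

def update(state, mps, bit):
    """Advance the estimator state; nothing beyond a table lookup."""
    _, next_mps, next_lps, swap = STATES[state]
    if bit == mps:
        return next_mps, mps
    return next_lps, (1 - mps) if swap else mps
```

A run of MPS observations walks toward the skewed states (smaller p_lps), while an LPS observation backs off toward the near-uniform states, mirroring the adaptive behavior that a multiplication-based estimator would compute arithmetically.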