1. Technical Field
The present invention relates to decompression in data compression systems with decoder side-information. More specifically, the present invention relates to decompression in data compression systems in which the decoder side-information includes a plurality of signals each of which is correlated to the source which is to be decompressed.
2. Description of Related Art
Data compression and decompression with decoder side-information is of practical interest in several applications. These include, but are not limited to, low-complexity media coding, scalable and error-resilient data transmission, transmission of media and text over distributed and peer-to-peer networks, compression of sensor network data and video, storage of biometric data, etc. Data compression systems which utilize decoder side-information are commonly termed Wyner-Ziv coding systems. A typical Wyner-Ziv system includes an encoder which compresses a source signal, and a decoder which decodes the source signal with the help of one or more correlated signals, termed the decoder side-information. The case where more than one correlated signal is present at the decoder as side-information is termed the multi-hypothesis decoder side-information coding case.
FIG. 1 depicts a Wyner-Ziv coding system. The system includes an encoder 100 and a decoder 110. The input to the encoder is the source signal X 101, which is to be compressed and communicated to the decoder. The source signal 101 is passed through a lossy source coder 102 which, typically, converts the input signal into a quantized signal 104 whose samples take values from a discrete set of integers. As an example, in the case of a video Wyner-Ziv encoder, the source to be compressed is the current video frame; the lossy source coder 102 first transforms the data using a discrete cosine transform, and then uses a uniform scalar quantizer with a deadzone to convert the transform coefficients into integers. The quantized signal 104 passes through a Slepian-Wolf coder 103. The Slepian-Wolf coder 103 processes the quantized signal and generates a syndrome or parity bitstream 105 which is communicated to the decoder. As an example, the Slepian-Wolf coder 103 may include a good channel coder, in which case the quantized signal 104 is multiplied by the parity-check matrix of the channel code to generate the syndrome bitstream 105. As another example, a systematic channel code may be used in the Slepian-Wolf coder 103, in which case the quantized signal 104 is multiplied by the generator matrix of the channel code, and the parity bits generated constitute the parity bitstream 105. Typically, the syndrome or parity bitstream includes a plurality of indices drawn from the set of integers or a Galois field.
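The syndrome-generation step described above can be sketched as follows. This is a minimal toy illustration, not the system of the invention: the (7,4) Hamming parity-check matrix and the seven-bit input are hypothetical stand-ins for the much longer, stronger channel codes (e.g. LDPC codes) a practical Slepian-Wolf coder would use, and the arithmetic is over GF(2).

```python
import numpy as np

def syndrome_bits(x, H):
    """Compute the syndrome s = H * x (mod 2) for a bit vector x."""
    return (H @ x) % 2

# Toy (7,4) Hamming-code parity-check matrix (hypothetical example).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

x = np.array([1, 0, 1, 1, 0, 0, 1])  # one bit-plane of the quantized signal
s = syndrome_bits(x, H)              # 3 syndrome bits sent instead of 7 source bits
```

The compression arises because only the syndrome (here 3 bits) is transmitted; the decoder recovers the full 7-bit vector by combining the syndrome with its correlated side-information.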
The inputs to the Wyner-Ziv decoder 110 are the syndrome/parity bitstream 105, and the decoder side-information signals Y1, . . . , YJ 113. The Slepian-Wolf decoder 111 processes the syndrome/parity bitstream 105 and the decoder side-information 113 to reconstruct the quantized source signal 114. As an example, in the case of video Wyner-Ziv decoding, the side-information signal may include a previously reconstructed video frame, and the Slepian-Wolf decoder treats the side-information as a corrupted version of the source video frame and may use a soft channel decoding algorithm to correct the side-information. The quantized source signal 114 is passed through the source reconstruction means 112 which converts it into a reconstructed source signal Xr 115 which lies in the same domain as the source signal 101. The source reconstruction may utilize the side-information 113. As an example, in the case of video Wyner-Ziv decoding, the source reconstruction means may use an inverse quantizer whose reconstruction points depend on the side-information 113, followed by an inverse discrete cosine transform to reconstruct the source video frame.
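The deadzone quantizer used at the encoder and a side-information-dependent inverse quantizer of the kind mentioned above can be sketched as follows. The specific reconstruction rule shown (using the side-information sample directly whenever it falls inside the decoded quantization bin, and the bin midpoint otherwise) is a hypothetical illustration of side-information-dependent reconstruction, not the particular rule of any actual system.

```python
import numpy as np

def deadzone_quantize(coeffs, step):
    """Uniform scalar quantizer with a deadzone: values in (-step, step)
    map to bin 0; magnitudes are floored toward zero."""
    return (np.sign(coeffs) * np.floor(np.abs(coeffs) / step)).astype(int)

def reconstruct(q, step, side_info=None):
    """Inverse quantizer with midpoint reconstruction. If side-information
    is given and lies inside the decoded bin, use it as the reconstruction
    point (hypothetical side-information-dependent rule)."""
    lo = q * step                    # bin edge nearer to zero
    hi = lo + np.sign(q) * step      # bin edge farther from zero
    mid = np.where(q == 0, 0.0, (lo + hi) / 2.0)
    if side_info is not None:
        same_bin = deadzone_quantize(side_info, step) == q
        mid = np.where(same_bin, side_info, mid)
    return mid

q = deadzone_quantize(np.array([7.3, -4.9, 1.2]), step=3.0)
x_plain = reconstruct(q, step=3.0)
x_side = reconstruct(q, step=3.0, side_info=np.array([7.0, -10.0, 0.5]))
```

In the third line of the usage, the first and last side-information samples fall in the decoded bins and are kept, while the second does not and the bin midpoint is used instead.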
When the quantized source signal 114 at the decoder does not match the quantized source signal 104 at the encoder, the Slepian-Wolf decoding is deemed to have failed, and the result is a distorted source reconstruction 115. To avoid Slepian-Wolf coding failure, the rate of the syndrome/parity bitstream 105 (i.e. the number of syndrome or parity symbols) needs to be sufficiently high. However, a high rate of the syndrome/parity bitstream 105 conflicts with the goal of compression, which is to transmit as low a bitstream rate as possible from encoder to decoder. In general, the better the Slepian-Wolf decoder, the lower the syndrome/parity bitstream rate needed for decoding without failure, and thus the greater the achieved compression.
FIG. 2 shows the detailed working of a conventional Slepian-Wolf decoding means in the case where the decoder side-information includes two signals Y1 and Y2. As an example, in the case of a video Wyner-Ziv decoder, the decoder side-information may include two previously reconstructed video frames. The inputs to the Slepian-Wolf decoder 200 are the syndrome/parity bitstream 201 received from the Wyner-Ziv encoder, and the side-information signals Y1 207 and Y2 208. The side-information signals are combined using a fixed linear combination 205 and the linearly combined signal is passed to the probability estimation means 206. The probability estimation means 206 computes the conditional probability P(X|Y1,Y2) 209 of the source signal, conditioned on the computed linear combination. The syndrome/parity bitstream 201 and the conditional probability distribution 209 are both input to the soft channel decoder 202. The output of the soft channel decoder is an a-posteriori probability distribution Q(X) 203 of the quantized source signal. The a-posteriori probability distribution Q(X) 203 is passed through a likelihood threshold means 204 which computes the most probable value of the quantized source signal based on Q(X). The computed most probable source signal value is output as the quantized source signal 210.
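The conventional probability-estimation and likelihood-threshold steps above can be sketched as follows. The Gaussian correlation model between the source and the combined side-information, the weights a1 = a2 = 0.5, and the symbol alphabet are all assumptions made for illustration; they are not part of the conventional system's specification.

```python
import numpy as np

def conditional_pmf(y, symbols, sigma):
    """P(X = x | Y = y) over the candidate quantized symbols, modeling the
    source as the combined side-information plus Gaussian noise (an assumed
    correlation model)."""
    p = np.exp(-0.5 * ((symbols - y) / sigma) ** 2)
    return p / p.sum()

def map_estimate(pmf, symbols):
    """Likelihood-threshold step: output the most probable symbol."""
    return symbols[np.argmax(pmf)]

a1, a2 = 0.5, 0.5                  # fixed weights with a1 + a2 = 1
y1, y2 = 4.2, 3.6                  # side-information samples (hypothetical)
y = a1 * y1 + a2 * y2              # fixed linear combination of Y1 and Y2
symbols = np.arange(-8, 9)         # quantized-symbol alphabet (hypothetical)
pmf = conditional_pmf(y, symbols, sigma=1.0)
x_hat = map_estimate(pmf, symbols)
```

In the full conventional decoder, the distribution returned by conditional_pmf would first be refined by the soft channel decoder using the syndrome/parity bitstream before the likelihood-threshold step is applied; here that stage is omitted for brevity.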
One limitation of the Slepian-Wolf decoding method described above is that it is inefficient in terms of the syndrome/parity bitstream rate needed for Slepian-Wolf decoding to succeed. This is because the soft channel coding requires the probability estimate P(X|Y1, . . . , YJ) for best compression efficiency, i.e. for correct decoding with the minimum possible syndrome/parity bitstream rate. However, computing P(X|Y1, . . . , YJ) is, typically, infeasible since it requires computation of the high-dimensional joint probability function P(X, Y1, . . . , YJ), which would need more samples than are typically available at the decoder. Consequently, the conventional Slepian-Wolf decoding method described above uses the probability function P(X|a1Y1+ . . . +aJYJ), where a1+ . . . +aJ=1, as an approximation to P(X|Y1, . . . , YJ), as shown in FIG. 2 for the case where J=2. This approximation, however, is often not very good, and thus the Slepian-Wolf decoder needs a high syndrome/parity bitstream rate for correct decoding. This results in poor compression performance.
Therefore, a need exists for an improved Slepian-Wolf decoding method that achieves decoding without failure at a low syndrome/parity bitstream rate.