Video information requires a large amount of storage space; therefore, video information is generally compressed before it is stored. Accordingly, to display compressed video information which is stored, for example, on a compact disc read-only memory (CD-ROM), the compressed video information must be decompressed to provide decompressed video information. The decompressed video information is then provided in a bit stream to a display. The bit stream of video information is generally stored in a plurality of memory storage locations corresponding to pixel locations on a display; the stored video information is generally referred to as a bit map. The video information required to present a single screen of information on a display is called a picture. A goal of many video systems is to quickly and efficiently decode compressed video information so as to provide motion video.
Standardization of recording media, devices and various aspects of data handling, such as video compression, is highly desirable for continued growth of this technology and its applications. One compression standard which has attained widespread use for compressing and decompressing video information is the Moving Picture Experts Group (MPEG) standard for video encoding and decoding. The MPEG standard is defined in International Standard ISO/IEC 11172-1, "Information Technology--Coding of moving pictures and associated audio for digital storage media at up to about 1.5 Mbit/s", Parts 1, 2 and 3, First edition Aug. 1, 1993, which is hereby incorporated by reference in its entirety. Other standards include the Joint Photographic Experts Group (JPEG) standard and the Consultative Committee for International Telegraphy and Telephony (CCITT) H.261 standard.
Pictures within the MPEG standard are divided into 16×16 pixel macroblocks. Each macroblock includes six 8×8 blocks: four luminance (Y) blocks, one chrominance red (Cr) block and one chrominance blue (Cb) block. The luminance blocks correspond to sets of 8×8 pixels on a display and control the brightness of respective pixels. The chrominance blocks to a large extent control the colors for sets of four pixels. For each set of four pixels on the display, there is a single Cr characteristic and a single Cb characteristic.
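Expressed as a data structure, the macroblock layout described above might be sketched as follows (a minimal illustration; the type and field names are invented here and are not taken from the MPEG standard):

```c
#include <stdint.h>

/* One 8x8 block of 8-bit sample values. */
typedef struct {
    uint8_t sample[8][8];
} Block;

/* A 16x16 macroblock: four luminance blocks covering the 16x16
 * pixel area, plus one Cr block and one Cb block, each of whose
 * samples applies to a 2x2 set of four pixels. */
typedef struct {
    Block y[4]; /* luminance: top-left, top-right, bottom-left, bottom-right */
    Block cr;   /* chrominance red, one value per set of four pixels */
    Block cb;   /* chrominance blue, one value per set of four pixels */
} Macroblock;
```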
For example, referring to FIG. 1, labeled prior art, a picture presented by a typical display includes 240 lines of video information in which each line has 352 pixels. Accordingly, a picture includes 240×352=84,480 pixel locations. Under the MPEG standard, this picture of video includes 44 by 30 luminance blocks, or 1,320 blocks of luminance video information. Additionally, because each macroblock of information also includes two corresponding chrominance blocks, each picture of video information also includes 330 Cr blocks and 330 Cb blocks. Accordingly, each picture of video information comprises 126,720 sample values and requires 1,013,760 bits of bit-mapped storage space for presentation on a display.
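The block and storage arithmetic above can be checked with a short computation (assuming 8 bits per stored sample value; the constant names are illustrative):

```c
/* Block and storage arithmetic for the 240-line by 352-pixel example. */
enum {
    LINES = 240,
    PIXELS_PER_LINE = 352,
    PIXEL_LOCATIONS = LINES * PIXELS_PER_LINE,            /* 84,480          */
    Y_BLOCKS = (PIXELS_PER_LINE / 8) * (LINES / 8),       /* 44 x 30 = 1,320 */
    MACROBLOCKS = (PIXELS_PER_LINE / 16) * (LINES / 16),  /* 22 x 15 = 330   */
    CR_BLOCKS = MACROBLOCKS,                              /* 330             */
    CB_BLOCKS = MACROBLOCKS,                              /* 330             */
    TOTAL_SAMPLES = (Y_BLOCKS + CR_BLOCKS + CB_BLOCKS) * 64, /* 126,720      */
    TOTAL_BITS = TOTAL_SAMPLES * 8                           /* 1,013,760    */
};
```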
There are three types of pictures of video information which are defined by the MPEG standard: intra pictures (I pictures), forward-predicted pictures (P pictures) and bidirectionally predicted pictures (B pictures).
An I picture is encoded as a single image having no reference to any past or future picture. Each block of an I picture is encoded independently. Accordingly, when decoding an I picture, no motion processing is necessary. However, for the reasons discussed below, it is necessary to store and access I pictures for use in decoding other types of pictures.
A P picture is encoded relative to a past reference picture. A reference picture is a P or I picture. The past reference picture is the closest preceding reference picture. Each macroblock in a P picture can be encoded either as an I macroblock or as a P macroblock. A P macroblock is encoded as a 16×16 area of a past reference picture plus an error term. To specify the location of the P macroblock, a motion vector (i.e., an indication of the relative position of the macroblock with reference to the past reference picture) is also encoded. When decoding a P picture, the current P macroblock is created from the 16×16 area of the reference picture. The macroblock from the reference picture is offset according to the motion vector. The decoding function accordingly includes motion compensation, which is performed on a macroblock, in combination with error (IDCT) terms, which are defined on a block-by-block basis.
A B picture is encoded relative to the past reference picture and a future reference picture. The future reference picture is the closest succeeding reference picture. Accordingly, the decoding of a B picture is similar to that of a P picture, with the exception that a B picture motion vector may refer to areas in the future reference picture. For macroblocks that use both past and future reference pictures, the two 16×16 areas are averaged. When decoding a B picture, the current B macroblock is created from the 16×16 areas of the past and future reference pictures. The macroblocks from the reference pictures are offset according to motion vectors.
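The motion-compensated prediction described above can be sketched as follows (a simplified, full-pel-only illustration with invented function names; the MPEG standard additionally defines half-pel interpolation and precise rounding rules that are omitted here):

```c
#include <stdint.h>

/* Copy a 16x16 area from a reference picture, offset by a motion
 * vector, into a prediction buffer (the P macroblock case).
 * ref is a luminance plane of the given width; (x, y) is the
 * macroblock's top-left corner; (mv_x, mv_y) is a full-pel motion
 * vector. The caller must keep all accesses inside the plane. */
static void predict_p(const uint8_t *ref, int width,
                      int x, int y, int mv_x, int mv_y,
                      uint8_t pred[16][16]) {
    for (int r = 0; r < 16; r++)
        for (int c = 0; c < 16; c++)
            pred[r][c] = ref[(y + mv_y + r) * width + (x + mv_x + c)];
}

/* The B macroblock case with both references: the predictions from
 * the past and future reference pictures are averaged sample by
 * sample (rounding here is illustrative). */
static void predict_b(uint8_t fwd[16][16], uint8_t bwd[16][16],
                      uint8_t pred[16][16]) {
    for (int r = 0; r < 16; r++)
        for (int c = 0; c < 16; c++)
            pred[r][c] = (uint8_t)((fwd[r][c] + bwd[r][c] + 1) / 2);
}
```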
Pictures are coded using a discrete cosine transform (DCT) coding scheme which encodes each block as a set of coefficients, each coefficient representing the amplitude of a specific cosine basis function. The DCT coefficients are further coded using variable length coding. Variable length coding (VLC) is a statistical coding technique that assigns codewords to values to be encoded. Values of high frequency of occurrence are assigned short codewords, and those of infrequent occurrence are assigned long codewords. On the average, the more frequent shorter codewords dominate, so that the code string is shorter than the original data.
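The principle of variable length coding can be illustrated with a toy prefix code; the code table below is invented for illustration and is not the MPEG code table:

```c
/* Decode one symbol from a bit string using a toy prefix code:
 *   "0"   -> symbol 0  (most frequent, shortest codeword)
 *   "10"  -> symbol 1
 *   "110" -> symbol 2
 *   "111" -> symbol 3  (least frequent, longest codeword)
 * Because no codeword is a prefix of another, the decoder can
 * identify each codeword as soon as its last bit is read.
 * bits is an array of 0/1 values; *pos is advanced past the
 * consumed codeword. */
static int vlc_decode(const int *bits, int *pos) {
    if (bits[*pos] == 0)     { *pos += 1; return 0; }
    if (bits[*pos + 1] == 0) { *pos += 2; return 1; }
    if (bits[*pos + 2] == 0) { *pos += 3; return 2; }
    *pos += 3; return 3;
}
```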
The ISO/IEC 11172-2 standard stipulates for intra-coded luminance and chrominance macroblocks that dct_recon[m][n], the matrix of reconstructed DCT coefficients of a block, shall be computed by any means equivalent to the following procedure:
______________________________________
for (m = 0; m < 8; m++) {
    for (n = 0; n < 8; n++) {
        i = scan[m][n];
        dct_recon[m][n] = (2 * dct_zz[i] * quantizer_scale * intra_quant[m][n]) / 16;
        if ((dct_recon[m][n] & 1) == 0)
            dct_recon[m][n] = dct_recon[m][n] - Sign(dct_recon[m][n]);
        if (dct_recon[m][n] > 2047)
            dct_recon[m][n] = 2047;
        if (dct_recon[m][n] < -2048)
            dct_recon[m][n] = -2048;
    }
}
dct_recon[0][0] = dct_zz[0] * 8;
if (macroblock_address - past_intra_address > 1)
    dct_recon[0][0] = (128 * 8) + dct_recon[0][0];
else
    dct_recon[0][0] = dct_dc_X_past + dct_recon[0][0];
dct_dc_X_past = dct_recon[0][0];
______________________________________
In this procedure, m identifies the row and n identifies the column of the matrix. Scan[][] is a matrix defining a zigzag scanning sequence. Dct_zz[] is a zigzag-scanned quantized DCT coefficient list. Each dct_zz[] list is associated with a particular block. Quantizer_scale, which may be specified in a header for each macroblock, is a number used to calculate DCT coefficients from the transmitted quantized coefficients. Intra_quant[][] is an intra-coded picture quantizer matrix that is specified in a sequence header. Past_intra_address is the macroblock_address of the most recently retrieved intra-coded macroblock within a slice. (Pictures are divided into slices. Each slice consists of an integral number of macroblocks in raster scan order.)
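The zigzag scanning sequence referred to by scan[][] is the conventional 8×8 zigzag order, in which scan[m][n] gives the index i into dct_zz[] for matrix position (m, n), so that low-frequency coefficients appear first in the list:

```c
/* Conventional 8x8 zigzag scan order: scan[m][n] is the position of
 * coefficient (m, n) in the zigzag-scanned list dct_zz[]. */
static const int scan[8][8] = {
    {  0,  1,  5,  6, 14, 15, 27, 28 },
    {  2,  4,  7, 13, 16, 26, 29, 42 },
    {  3,  8, 12, 17, 25, 30, 41, 43 },
    {  9, 11, 18, 24, 31, 40, 44, 53 },
    { 10, 19, 23, 32, 39, 45, 52, 54 },
    { 20, 22, 33, 38, 46, 51, 55, 60 },
    { 21, 34, 37, 47, 50, 56, 59, 61 },
    { 35, 36, 48, 49, 57, 58, 62, 63 },
};
```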
Similarly, the ISO/IEC 11172-2 standard stipulates for inter-coded macroblocks that dct_recon[m][n], the matrix of reconstructed DCT coefficients of a block, shall be computed by any means equivalent to the following procedure:
______________________________________
for (m = 0; m < 8; m++) {
    for (n = 0; n < 8; n++) {
        i = scan[m][n];
        dct_recon[m][n] = (((2 * dct_zz[i]) + Sign(dct_zz[i])) *
                           quantizer_scale * non_intra_quant[m][n]) / 16;
        if ((dct_recon[m][n] & 1) == 0)
            dct_recon[m][n] = dct_recon[m][n] - Sign(dct_recon[m][n]);
        if (dct_recon[m][n] > 2047)
            dct_recon[m][n] = 2047;
        if (dct_recon[m][n] < -2048)
            dct_recon[m][n] = -2048;
        if (dct_zz[i] == 0)
            dct_recon[m][n] = 0;
    }
}
______________________________________
Non_intra_quant[][] is the non-intra quantizer matrix that is specified in the sequence header.
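The inter-coded reconstruction procedure above can be rendered as a self-contained C function (underscores replace the standard's printed subscript notation; the function name and parameter packaging are otherwise illustrative):

```c
/* Sign(): -1 for negative, 0 for zero, 1 for positive. */
static int sign(int x) { return (x > 0) - (x < 0); }

/* Reconstruct one inter-coded 8x8 block from the zigzag-scanned
 * quantized coefficient list dct_zz[]: inverse quantization,
 * forcing nonzero results odd, and saturation to [-2048, 2047].
 * Coefficients that were quantized to zero remain zero. */
static void recon_inter(int scan[8][8], const int dct_zz[64],
                        int quantizer_scale, int non_intra_quant[8][8],
                        int dct_recon[8][8]) {
    for (int m = 0; m < 8; m++) {
        for (int n = 0; n < 8; n++) {
            int i = scan[m][n];
            dct_recon[m][n] = (((2 * dct_zz[i]) + sign(dct_zz[i])) *
                               quantizer_scale * non_intra_quant[m][n]) / 16;
            if ((dct_recon[m][n] & 1) == 0)
                dct_recon[m][n] -= sign(dct_recon[m][n]);
            if (dct_recon[m][n] > 2047)
                dct_recon[m][n] = 2047;
            if (dct_recon[m][n] < -2048)
                dct_recon[m][n] = -2048;
            if (dct_zz[i] == 0)
                dct_recon[m][n] = 0;
        }
    }
}
```

For example, with quantizer_scale of 8 and a flat quantizer matrix of 16, a quantized coefficient of 10 reconstructs as ((20 + 1) * 8 * 16) / 16 = 168, which is then forced odd to 167.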
For a video system to provide a motion video capability, compressed video information must be quickly and efficiently decoded. One aspect of the decoding process is variable length code (VLC) decoding. A time-critical operation in VLC decoding is parsing of the VLC bit stream. Highly efficient parsing and decoding are critical for a high-performance motion video capability. A technique for providing such highly efficient parsing and decoding is therefore needed.