The present invention relates to a method and apparatus for allowing JPEG images to be displayed on a DVD player. In particular, the present invention provides a technique whereby progressively encoded JPEG images may be sequentially decoded. This feature is desirable, as many consumers are switching to digital cameras and storing large numbers of JPEG images on DVDs or CD-ROMs. These discs can be mailed to family and friends, or can be played back on a DVD player to create a slide show for neighbors and guests.
A DVD player can be designed to read these JPEG files, display the images on a TV screen, and navigate the images using a simple user interface via the DVD player remote control. One problem in enabling such a feature is that JPEG images may be encoded in one of two formats: progressive scan or sequential scan. FIG. 1A illustrates the gradual fade-in effect of a progressive scan JPEG image as it is rendered on a display. FIG. 1B illustrates the line-by-line construction of a sequential scan JPEG image as it is rendered on a display.
Referring to FIG. 1B, sequential scan JPEG images are decoded one line or row at a time, and during the decoding process, each successive line may be displayed as it is decoded. Since a television display paints its scan lines in a similar manner (from top to bottom), displaying sequential scan encoded JPEGs is not difficult to accomplish and does not require very intensive processing or large amounts of buffer memory. Because each line or row is decoded sequentially, only a one- or two-row buffer is required. Once a row of data is decoded, the row may be transferred into the existing DVD frame buffer and the next lines decoded.
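The row-buffered approach described above may be sketched as follows. This is a simplified illustration only; `decode_next_row` and the frame buffer list are hypothetical stand-ins for the actual decoder and display hardware, which are not specified here:

```python
def display_sequential(decode_next_row, frame_buffer, height):
    """Copy decoded rows into the frame buffer one at a time, so that
    only a single-row working buffer is ever held by the decoder."""
    for y in range(height):
        # Decode exactly one row, then hand it off to the frame buffer
        # before decoding the next row.
        frame_buffer[y] = decode_next_row()
```

Because each decoded row is immediately transferred out, the working memory is bounded by the width of one row rather than the size of the whole image.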
In FIG. 1B, the three successive views illustrate how the image is drawn on a display, one row at a time. Each of the three successive views in FIG. 1B represents how the image may look at arbitrary successive times during the drawing process, as the image is drawn from top to bottom. As the name implies, the sequential JPEG is drawn sequentially in a continuous process.
On the other hand, progressive scan encoded JPEG images are decoded one scan or layer at a time, so that the image appears faint and out of focus initially, and then gradually improves in resolution, as illustrated in FIG. 1A. Progressive scan JPEGs allow a user to see what the image depicts while it is still being decoded. Unlike the images in FIG. 1B, the three scans in FIG. 1A, Scan1, Scan2, and Scan3, represent distinct scan operations, whereby the image is reproduced first as a faint, fuzzy image (Scan1), then comes into focus (Scan2), until the complete image appears (Scan3). This progressive scan technique uses layers of scans to create an image progressively. Although three scans are illustrated here, other numbers of scans may be used in a progressive scan JPEG image.
In the era of dialup modems, the progressive scan JPEG was more useful, as a user might not want to wait for a 2 MB image to download from the Internet only to discover the image was not the one they were looking for. Of course, today, with high-speed connections available, the need for progressive scan may be questionable, as a JPEG image may appear almost instantaneously (regardless of scan type). Today, most images are encoded using sequential scan JPEG encoding.
Regardless, in order to provide complete JPEG support in a DVD player, a manufacturer should support both JPEG types, as both are part of the JPEG standard and consumers may wish to display images that are progressive scan encoded. Thus, a DVD player manufacturer that wants to advertise full JPEG compatibility should provide support for both types of JPEGs. The JPEG standard is set forth in more detail in International Telecommunication Union CCITT T.81 ISO/IEC 10918-1 1993-E (09/92), “TERMINAL EQUIPMENT AND PROTOCOLS FOR TELEMATIC SERVICES, INFORMATION TECHNOLOGY—DIGITAL COMPRESSION AND CODING OF CONTINUOUS-TONE STILL IMAGES—REQUIREMENTS AND GUIDELINES, Recommendation T.81”, incorporated herein by reference in its entirety.
The problem with progressive scan encoded images is that, in the Prior Art, a huge buffer was required to decode the image using conventional techniques, such as the standard software described in the JPEG standard incorporated by reference above. The buffer would have to store scan data for the entire image, and thus an image-sized buffer (memory space) was required to decode progressive scan JPEG images.
On a home PC, buffer memory may not be an issue, as memory (RAM or drive space) is readily available, and reducing the number of processor operations was (at least historically) more of a priority. For a consumer DVD player, however, such memory requirements may be considerably more onerous. A 1024×1024 buffer that is used only for JPEG decoding is not an efficient use of resources. In a consumer DVD player, trading off memory requirements for processor operations may be preferable.
In the Prior Art, most manufacturers bit the bullet and installed a large JPEG decoding buffer to decode the progressive scan JPEGs. This solution adds to the expense of the overall design, as a larger SRAM is required. Also, this large buffer takes away memory available for other applications in the system.
FIG. 2 is a block diagram of a Prior Art progressive scan encoder 220 as set forth in the ISO/IEC 10918-1 1993-E document previously incorporated by reference. FIG. 2 illustrates the main procedures for all encoding processes based on a Discrete Cosine Transform (DCT). In the prior encoding process, the samples of input source image data 160 may be grouped, for example, into 8×8 pixel blocks, one or more of which form a Minimum Coded Unit (MCU), and each block may then be transformed by the forward DCT (FDCT) 250 into a set of 64 values referred to as DCT coefficients. One of these values is referred to as the DC coefficient and the other 63 as the AC coefficients. Each of the 64 coefficients is then quantized in quantizer 240 using one of 64 corresponding values from a quantization table 170.
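The FDCT and quantization steps described above may be illustrated by the following sketch. This is a direct, unoptimized rendering of the DCT formula from the JPEG standard for illustration only; a practical encoder would use a fast DCT, and the function names here are the author's own, not part of the standard:

```python
import math

def fdct_8x8(block):
    """Forward 2-D DCT (Type II) of an 8x8 sample block, producing the
    64 DCT coefficients (direct form, unoptimized)."""
    out = [[0.0] * 8 for _ in range(8)]
    for u in range(8):
        for v in range(8):
            # Normalization factors: 1/sqrt(2) for the zero-frequency terms.
            cu = 1 / math.sqrt(2) if u == 0 else 1.0
            cv = 1 / math.sqrt(2) if v == 0 else 1.0
            s = sum(block[y][x]
                    * math.cos((2 * x + 1) * v * math.pi / 16)
                    * math.cos((2 * y + 1) * u * math.pi / 16)
                    for y in range(8) for x in range(8))
            out[u][v] = 0.25 * cu * cv * s
    return out

def quantize(coeffs, qtable):
    """Quantize each of the 64 coefficients by dividing by the
    corresponding quantization-table entry and rounding."""
    return [[round(coeffs[u][v] / qtable[u][v]) for v in range(8)]
            for u in range(8)]
```

For a uniform block, all AC coefficients vanish and only the DC coefficient (position [0][0]) carries energy, which is what makes the subsequent entropy coding effective.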
After quantization, the DC coefficient and the 63 AC coefficients are prepared for entropy encoding in entropy encoder 230. FIG. 1C illustrates how difference encoding may be used on the DC values. The previous quantized DC coefficient is used to predict the current quantized DC coefficient, and the difference is encoded. The 63 quantized AC coefficients undergo no such differential encoding, but are converted into a one-dimensional zig-zag sequence, which is illustrated in FIG. 1D and described in more detail in the ISO/IEC 10918-1 specification previously incorporated by reference.
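The DC difference encoding and the zig-zag ordering described above may be sketched as follows (an illustrative rendering consistent with the JPEG standard; the helper names are the author's own):

```python
def dc_differences(dc_values):
    """Differentially encode DC coefficients: each DC value is
    predicted from the previous block's DC (initially 0), and only
    the difference is passed to the entropy encoder."""
    pred = 0
    diffs = []
    for dc in dc_values:
        diffs.append(dc - pred)
        pred = dc
    return diffs

def zigzag_order():
    """Return the (row, col) visiting order of the 8x8 zig-zag scan.
    Coefficients on the same anti-diagonal (row + col constant) are
    visited together, alternating direction on each diagonal."""
    return sorted(((r, c) for r in range(8) for c in range(8)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))
```

The zig-zag scan places low-frequency coefficients first, so that the many high-frequency zeros produced by quantization cluster at the end of the sequence.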
Entropy encoder 230 then compresses the quantized coefficients further. One of two entropy coding procedures can be used. If Huffman encoding is used, Huffman table specifications 180 may be provided to the encoder. If arithmetic encoding is used, arithmetic coding conditioning table specifications 180 may be provided; otherwise, default conditioning table specifications 180 may be used. Entropy encoder 230 outputs compressed image data 110, which may be in the form of a progressive scan JPEG image file.
FIG. 3 illustrates the main procedures for all DCT-based decoding processes. Each step shown performs essentially the inverse of the corresponding procedure of the encoder of FIG. 2 as discussed above. Encoded (compressed) image data 110 is fed to the DCT-based decoder 320. Entropy (Huffman) decoder 330 decodes the zig-zag sequence of quantized DCT coefficients. After dequantization in dequantizer 340, the DCT coefficients are transformed to an 8×8 block of samples by the inverse DCT (IDCT) 350 to produce the reconstructed image data 360.
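The dequantization and IDCT steps performed by dequantizer 340 and IDCT 350 may be sketched as follows (again a direct, unoptimized illustration of the standard's formula, not the claimed implementation):

```python
import math

def dequantize(q, qtable):
    """Invert quantization by multiplying each coefficient by its
    quantization-table entry."""
    return [[q[u][v] * qtable[u][v] for v in range(8)] for u in range(8)]

def idct_8x8(coeffs):
    """Inverse 2-D DCT reconstructing an 8x8 block of samples from
    its 64 DCT coefficients (direct form, unoptimized)."""
    out = [[0.0] * 8 for _ in range(8)]
    for y in range(8):
        for x in range(8):
            s = 0.0
            for u in range(8):
                for v in range(8):
                    cu = 1 / math.sqrt(2) if u == 0 else 1.0
                    cv = 1 / math.sqrt(2) if v == 0 else 1.0
                    s += (cu * cv * coeffs[u][v]
                          * math.cos((2 * x + 1) * v * math.pi / 16)
                          * math.cos((2 * y + 1) * u * math.pi / 16))
            out[y][x] = 0.25 * s
    return out
```

A block whose only nonzero coefficient is the DC term reconstructs to a uniform block, mirroring the forward transform discussed with FIG. 2.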
The MCU (Minimum Coded Unit) may comprise a number of blocks from each component present in the image. The number of blocks from each component in an MCU may be decided based on the component ratio in the image (e.g., the ratio between luminance Y components and chrominance difference U,V components). In a worst-case scenario, a component may contribute a 2×2 group of blocks to each MCU (two blocks in one row and two blocks in the next row). Since the Huffman decoding scheme does not begin decoding from a block in the middle of an MCU, but rather from the start of the MCU, it is easier to decode the MCU once and place the data in two rows of the buffer. The use of a single-row buffer is possible, but would require that each MCU be decoded twice, and the process is already processor intensive as it is.
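The relationship between component sampling ratios and blocks per MCU may be illustrated by the following sketch; the dictionary layout and the specific 4:2:0 example figures are illustrative assumptions, not limitations of the described method:

```python
def blocks_per_mcu(sampling_factors):
    """Compute how many 8x8 blocks each image component contributes to
    one MCU, given its (horizontal, vertical) sampling factors.
    E.g., with common 4:2:0 chroma subsampling, Y has factors (2, 2)
    and contributes a 2x2 group of four blocks per MCU, while each
    chrominance component contributes one block."""
    return {name: h * v for name, (h, v) in sampling_factors.items()}
```

With Y contributing a 2×2 group of blocks, one MCU spans two block rows of luminance data, which is why decoding the MCU once into a two-row buffer avoids decoding each MCU twice.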