Multimedia such as video and audio can be transmitted over a number of paths, including cable, the Internet, cellular and broadcast. For instance, satellite or terrestrial broadcast stations or cellular systems can be used to transmit multimedia to mobile computing devices such as mobile telephones. The multimedia data can be formatted in accordance with Moving Pictures Expert Group (MPEG) standards such as MPEG-1, MPEG-2 (also used for the DVD format), MPEG-4 and other block-based transform codecs.

For individual video frames, these multimedia standards essentially use Joint Photographic Experts Group (JPEG)-style compression. In JPEG, the image of a single frame is typically divided into small blocks of pixels (usually 8×8 and/or 16×16 pixel blocks) that are encoded using a discrete cosine transform (DCT) to convert the spatial intensity values represented by the pixels into spatial frequency values, arranged within a block roughly from lowest frequency to highest. The DCT values are then quantized, i.e., the information is reduced by, e.g., dividing every value by a step size such as 10 and rounding to the nearest integer. Because the DCT concentrates larger values near the top left corner of a block and smaller values near the lower right corner, a special zigzag ordering of the values can be applied that facilitates further compression by run-length coding (essentially, storing a count of the number of, e.g., zero values that appear consecutively, instead of storing all the zero values individually). If desired, the resulting numbers may be used to look up symbols from a table developed using Huffman coding, which assigns shorter symbols to the most common numbers, an operation commonly referred to as “variable length coding”. Other variable length coding schemes, such as arithmetic coding, can be used as well.

Motion pictures add a temporal dimension to the spatial dimension of single pictures.
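For illustration only, the intra-frame steps described above (DCT, quantization, zigzag scan, run-length coding) can be sketched as follows. The step size of 10 matches the example in the text; a real codec uses per-frequency quantization tables and entropy-codes the (run, value) pairs, and this naive DCT is unoptimized.

```python
import math

def dct_2d(block):
    """Naive 8x8 two-dimensional DCT-II (illustrative, not optimized)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
            cv = math.sqrt(1 / n) if v == 0 else math.sqrt(2 / n)
            s = 0.0
            for r in range(n):
                for c in range(n):
                    s += (block[r][c]
                          * math.cos((2 * r + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * c + 1) * v * math.pi / (2 * n)))
            out[u][v] = cu * cv * s
    return out

def quantize(coeffs, step=10):
    """Divide each coefficient by a step (10, as in the text) and round."""
    return [[round(c / step) for c in row] for row in coeffs]

def zigzag(block):
    """Scan a block along anti-diagonals, alternating direction, so the
    low-frequency (top-left) values come first."""
    n = len(block)
    order = sorted(((r, c) for r in range(n) for c in range(n)),
                   key=lambda p: (p[0] + p[1],
                                  p[0] if (p[0] + p[1]) % 2 else p[1]))
    return [block[r][c] for r, c in order]

def run_length(values):
    """Emit (zero_run, value) pairs; trailing zeros collapse to 'EOB'."""
    pairs, zeros = [], 0
    for v in values:
        if v == 0:
            zeros += 1
        else:
            pairs.append((zeros, v))
            zeros = 0
    if zeros:
        pairs.append("EOB")
    return pairs
```

For a uniform 8×8 block of intensity 128, all energy lands in the single DC coefficient, and the remaining 63 values run-length-code away to a single end-of-block marker.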
MPEG is essentially a compression technique that uses motion estimation to further compress a video stream in the temporal dimension. Other, non-block-based encoding schemes, such as wavelets and matching pursuits, can be used as well. Other forms of multimedia include audio, graphics, etc.
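A minimal sketch of the block-based motion estimation mentioned above: an exhaustive search compares a block of the current frame against displaced blocks of a reference frame using the sum of absolute differences (SAD) and keeps the lowest-cost displacement as the motion vector. The block size, search window, and cost metric here are illustrative assumptions, not those of any particular MPEG profile.

```python
def sad(cur, ref, bx, by, dx, dy, n=4):
    """Sum of absolute differences between the n x n block of `cur` at
    column bx, row by and the reference block displaced by (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(n) for x in range(n))

def best_motion_vector(cur, ref, bx, by, n=4, search=2):
    """Exhaustive ("full") search over a +/- `search` pixel window;
    returns the best (dx, dy) displacement and its SAD cost."""
    h, w = len(ref), len(ref[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            # Skip displacements that fall outside the reference frame.
            if not (0 <= bx + dx and bx + dx + n <= w
                    and 0 <= by + dy and by + dy + n <= h):
                continue
            cost = sad(cur, ref, bx, by, dx, dy, n)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best, best_cost
```

The encoder then transmits the motion vector plus the (transform-coded) residual between the block and its motion-compensated prediction, rather than the block itself.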
Internet Protocol (IP)-based principles such as Point-to-Point Protocol (PPP) framing of IP packets can be used to communicate multimedia data, including MPEG data. PPP can be used not only for communicating IP packets over wired portions of the Internet, but also for communicating data over wireless transmission paths to user computers that employ wireless communication principles such as, but not limited to, code division multiple access (CDMA), GSM, wideband CDMA (WCDMA or UMTS), orthogonal frequency-division multiplexing (OFDM) and other wireless technologies.
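As an illustration of PPP framing, the byte stuffing of RFC 1662 can be sketched: a 0x7E flag byte delimits each frame, and any flag or escape byte occurring inside the payload is replaced by the 0x7D escape followed by the byte XOR 0x20, so the delimiter stays unique. A real PPP frame also carries address, control, protocol, and FCS fields, which are omitted here.

```python
FLAG, ESC = 0x7E, 0x7D  # RFC 1662 frame delimiter and escape bytes

def ppp_stuff(payload: bytes) -> bytes:
    """Wrap a payload in flag bytes, escaping 0x7E/0x7D occurrences."""
    out = bytearray([FLAG])
    for b in payload:
        if b in (FLAG, ESC):
            out += bytes([ESC, b ^ 0x20])  # escape, then complement bit 5
        else:
            out.append(b)
    out.append(FLAG)
    return bytes(out)

def ppp_unstuff(frame: bytes) -> bytes:
    """Reverse the stuffing: drop the flags and undo each escape."""
    assert frame[0] == FLAG and frame[-1] == FLAG
    out, escaped = bytearray(), False
    for b in frame[1:-1]:
        if escaped:
            out.append(b ^ 0x20)
            escaped = False
        elif b == ESC:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```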
Multimedia data is typically voluminous, meaning that significant transmission path bandwidth, unfortunately a finite resource, must be used. This is particularly the case for high fidelity multimedia, e.g., high resolution video: the higher the quality of service (QoS) provided, the more bandwidth must be used.
As recognized by the present invention, several multimedia streams can be pooled together in a single channel. The channel might have a constant overall bandwidth in terms of bit rate, i.e., the number of bits that can be transmitted in the channel per unit time cannot exceed the “bandwidth” of the channel. Typically, each stream in the channel will be accorded a fixed fraction of the bandwidth. Accordingly, the bit rate for each multimedia stream typically is fixed.
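The fixed-fraction allocation described above amounts to simple arithmetic; this sketch (the function name and fraction values are hypothetical) splits a channel's constant bit rate among the pooled streams.

```python
def allocate_stream_rates(channel_bps, fractions):
    """Split a fixed channel bit rate among streams by fixed fractions.
    Fractions must sum to at most 1; rounding may leave a few bits of
    the channel unused (or slightly over-assigned)."""
    assert sum(fractions) <= 1.0 + 1e-9
    return [round(channel_bps * f) for f in fractions]
```

For example, a 1 Mbps channel carrying three streams at fixed fractions of 50%, 30%, and 20% yields per-stream rates of 500, 300, and 200 kbps, and each stream is held to its rate regardless of its momentary content complexity.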
A “base layer” is an MPEG-related term that may be defined as the most important part of the multimedia bit stream which, if successfully received, decoded, and presented to the user, would result in a baseline level of video, audio, or other multimedia stream acceptable to the user. On the other hand, an “enhancement layer” would, when combined with the base layer, enhance or improve the quality, resolution, frequency, signal-to-noise ratio, etc. of the multimedia stream when presented to the user, compared to that of the base layer alone.
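A toy illustration of the base/enhancement relationship, using SNR-style scalability on scalar samples: the base layer carries a coarse quantization that is decodable on its own, while the enhancement layer carries the residual quantized more finely, so decoding both layers reduces reconstruction error. The step sizes and the scalar (rather than transform-domain) formulation are assumptions for illustration only.

```python
def encode_layers(samples, base_step=16, enh_step=4):
    """Two-layer SNR-scalable coding sketch: coarse base quantization
    plus a finely quantized residual as the enhancement layer."""
    base = [round(s / base_step) for s in samples]
    recon_base = [q * base_step for q in base]
    enh = [round((s - r) / enh_step) for s, r in zip(samples, recon_base)]
    return base, enh

def decode(base, enh=None, base_step=16, enh_step=4):
    """Reconstruct from the base layer alone, or refine it with the
    enhancement layer when that layer was also received."""
    recon = [q * base_step for q in base]
    if enh is not None:
        recon = [r + e * enh_step for r, e in zip(recon, enh)]
    return recon
```

A receiver on a poor channel can thus present the base-layer reconstruction alone at acceptable quality, while a receiver that also gets the enhancement layer presents a lower-error reconstruction.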
With the above discussion in mind, it will be appreciated that in wireless transmission of multimedia to battery-powered mobile devices, three goals compete with each other: efficient bandwidth use, low mobile device power consumption, and high QoS. This is particularly true when one considers that wireless channels are more “lossy” (they experience more lost data) than wired channels. To guarantee higher levels of QoS, extra bandwidth might be required for retransmission of lost data; the alternative is to accept lost data frames and, hence, reduced QoS. These problems become more severe the further a receiver is from a base station and on heavily used channels. As an alternative to retransmission, a software application in a receiver experiencing reduced QoS can attempt to execute advanced error correction schemes, but this in turn drains the receiver's battery by requiring the RF receiver to remain on longer and requiring more complex decoding, and may still result in unacceptably low QoS. Having recognized these problems, the below-described solutions to one or more of them are provided herein.