Internet applications that employ audio and video streaming are becoming increasingly prevalent. (As used herein, the term “audio” is intended to include speech as one example of an audio signal.) As a natural consequence of transmitting and receiving data over a packet-based network such as the Internet, packet delays become relatively large when network traffic is heavy. In particular, packet delays usually vary considerably with the momentary level of network congestion. Moreover, data packets are sometimes lost entirely in the network. Since applications that employ audio and video streaming are typically used in non-interactive environments, however, the end-to-end delay is usually not critical.
For these reasons, and as is fully familiar to those of ordinary skill in the art, data packets from such streaming applications are usually buffered at the receiving end over a time period which may typically be several seconds in duration. This buffering helps to reduce the detrimental effects of the relatively large and variable packet delays that result from varying levels of network congestion. Packet losses in the network are typically addressed by applying a forward error correction code across the packets, as is likewise familiar to those skilled in the art. The error correction capability of such an error-correcting code typically improves with the size of the data packets.
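The notion of protecting a group of packets with a forward error correction code, as described above, can be illustrated with a minimal sketch. The example below uses a single XOR parity packet over a group of equal-length data packets, one of the simplest such codes; it recovers any one lost packet in the group. The function names and four-packet group size are illustrative assumptions, not taken from the text, and practical systems typically use stronger codes (e.g., Reed-Solomon) over larger groups.

```python
# Minimal sketch of single-parity forward error correction across packets.
# One parity packet (the XOR of all data packets in a group) allows the
# receiver to recover any single packet lost within that group.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(packets):
    """Compute the XOR parity packet for a group of equal-length packets."""
    parity = packets[0]
    for p in packets[1:]:
        parity = xor_bytes(parity, p)
    return parity

def recover(received, parity):
    """received: the group as received, with exactly one entry None (lost).
    XOR-ing the parity with all surviving packets yields the missing one."""
    missing = received.index(None)
    acc = parity
    for i, p in enumerate(received):
        if i != missing:
            acc = xor_bytes(acc, p)
    return acc

packets = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = make_parity(packets)
damaged = list(packets)
damaged[1] = None                 # simulate one packet lost in the network
print(recover(damaged, parity))   # b'pkt1'
```

Because the parity is computed bytewise across the group, larger packets (and larger groups with stronger codes) spread the redundancy over more data, which is consistent with the observation above that error correction capability tends to improve with packet size.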
Clearly, then, a large receive buffer is highly desirable for providing a better-quality signal, because it increases the probability that most of the transmitted packets representing data within the given (i.e., buffered) period of time will have been successfully accumulated in the buffer before they must be decoded for “playback.” However, since the receive buffer usually must be filled initially before the signal can be decoded, a large buffer necessarily gives rise to a correspondingly large buffering delay and, in particular, a large start-up delay. Start-up delays of a few seconds can be quite annoying, especially when a channel switch is made in an Internet broadcast environment. Such an environment typically involves an Internet backbone which broadcasts many independent programs, and a number of users who each receive an individually selected program via a server connected to the backbone. A large start-up delay can thus be quite bothersome when a user changes the selected broadcast program. It would be highly desirable, therefore, to provide a source coding and receive-data buffering scheme which yields more acceptable start-up delays without sacrificing the benefits of a large receive buffer. In this manner, relatively painless channel switches may be effectuated while still maintaining high-quality steady-state performance.