The present invention relates to streaming of video data through a network.
Recent advances in computing and networking technology have popularized delivery of video data over the Internet. The Internet is a global internetwork of networks. The Internet uses the Transmission Control Protocol/Internet Protocol (TCP/IP) for reliably passing data packets from a source node to a destination node.
Streaming video can be used for live or recorded events. If a live event is streamed, it is referred to as real-time video streaming. On the other hand, if a recorded event is streamed, it is referred to as non-real-time video streaming. The real-time video streaming technique could be used to broadcast (or multicast or unicast) lectures, sports or entertainment events, and academic or other ceremonies. The non-real-time video streaming technique could be used to broadcast (or multicast or unicast) TV reruns, movies in the form of video on demand, or other previously downloaded or saved video files.
Specific issues need to be addressed to stream video over the Internet since the Internet was not originally designed for video streaming. For example, the Internet is a shared medium and uses a best-effort delivery mechanism, the Internet Protocol (IP), to deliver content. There is no dedicated path between the source node and the destination node. IP divides content into a plurality of self-contained packets, which are routed independently to the destination node. Limited bandwidth, latency, noise, packet loss, retransmission and out-of-order packet delivery are all problems that can affect video streaming over the Internet.
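The consequences of independent routing can be illustrated with a short sketch. The following fragment (illustrative only; the function names are hypothetical and this is not an actual TCP implementation) shows how sequence numbers allow a destination node to reassemble self-contained packets that arrive out of order:

```python
# Illustrative sketch only: sequence-numbered packets let the receiver
# restore the original byte stream even when independently routed
# packets arrive out of order. Real TCP adds acknowledgments,
# retransmission, and flow control on top of this basic idea.

def packetize(data, size):
    """Split a byte string into (sequence_number, chunk) packets."""
    return [(i, data[i:i + size]) for i in range(0, len(data), size)]

def reassemble(packets):
    """Restore the original byte stream regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(packets))

stream = b"video frame data"
packets = packetize(stream, 4)
packets.reverse()                     # simulate out-of-order arrival
assert reassemble(packets) == stream  # the stream is recovered intact
```

For streaming video, the cost of this recovery is the latency and retransmission overhead noted above, which is why lossy delivery and low-bit-rate coding matter so much in practice.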
In particular, the limited bandwidth of the Internet connection has been one of the main bottlenecks in bringing video streaming technology to the masses. Although computers are increasingly provided with high-speed Internet connections exceeding 1 megabit per second (Mbps), most computers still rely on dial-up connections of no more than 56 kilobits per second (kbps). Some wireless Internet devices, e.g., cell phones, have significantly lower connection speeds than 56 kbps. Accordingly, much effort has been invested in developing improved methods of compressing and streaming video data.
There are a variety of compression systems used today. The Moving Picture Experts Group (MPEG) has at least three open standards that can be used for streaming. MPEG is a joint committee of the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). The MPEG-1 standard, originally developed for VHS-quality video on CD-ROM in 1988, has its optimal bit rate at about 1.5 Mbps for quarter-screen TV (352×240) at 30 frames/sec. MPEG-1 is mainly considered a storage format; however, it does offer excellent streaming quality for the bit rate it supports. The MPEG-2 standard, ratified in 1996, was designed for use in digital TV broadcasting and is best known for DVD encoding. Its target bit rate is between 4 and 9 Mbps, but it can be used in HDTV for resolutions up to 1920×1080 pixels at 30 frames per second. The MPEG-4 standard, ratified in 1999, is a new standard specifically developed to address Web and mobile delivery. Its optimal bit rate is between 385 and 768 kbps, depending on the specific implementation. MPEG-4 is directed to new video streaming applications based on very low bit rate coding, such as video-phone, mobile multimedia and audio-visual communications, multimedia e-mail, remote sensing, interactive games, and the like. MPEG-4's lower requisite bit rate makes it more suitable for use in the Internet environment than prior versions of MPEG.
Like most video compression schemes, MPEG uses both interframe and intraframe compression to achieve its target data rate. Interframe compression is achieved between frames by eliminating information that is redundant from one frame to the next. The classic case is the “talking head” shot, such as with a news anchor, where the background remains stable and movement primarily relates to minor face and shoulder movements. Interframe compression techniques store the background information once, and then retain only the data required to describe the minor changes, e.g., facial movements, occurring between the frames.
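The “talking head” example can be sketched with simple frame differencing. The following fragment is illustrative only: MPEG actually uses block-based motion compensation rather than per-pixel differencing, and the function names here are hypothetical.

```python
# Illustrative sketch of interframe compression by frame differencing.
# MPEG's real interframe coding uses block-based motion compensation;
# this only shows the underlying idea of storing a stable background
# once and retaining just the changes between frames.

def encode_interframe(reference, current, threshold=0):
    """Return only the (index, value) pairs that changed vs. the reference."""
    return [(i, cur) for i, (ref, cur) in enumerate(zip(reference, current))
            if abs(cur - ref) > threshold]

def decode_interframe(reference, deltas):
    """Rebuild the current frame from the reference plus the stored deltas."""
    frame = list(reference)
    for i, value in deltas:
        frame[i] = value
    return frame

# "Talking head": a stable background with a few changed pixels.
background = [10] * 16               # previous frame
current = list(background)
current[5], current[6] = 200, 210    # small facial movement

deltas = encode_interframe(background, current)
assert len(deltas) == 2              # 2 stored values instead of 16
assert decode_interframe(background, deltas) == current
```

The savings grow with the stability of the scene: the more of the frame that is unchanged, the fewer deltas need to be stored or transmitted.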
Intraframe compression is compression achieved by eliminating redundant information from within a frame, without reference to other video frames. MPEG uses the Discrete Cosine Transform algorithm, or DCT, as its intraframe compression engine. By and large, however, most of MPEG's power comes from interframe, rather than intraframe, compression.
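The DCT's role in intraframe compression can be sketched in one dimension. The fragment below is a minimal illustration, not MPEG's actual codec path: MPEG applies a two-dimensional DCT to 8×8 pixel blocks with standardized quantization tables, whereas this shows only how a smooth run of pixels concentrates its energy in a few low-frequency coefficients.

```python
import math

# Illustrative sketch of the DCT-II underlying MPEG intraframe
# compression. A smooth block of pixel values concentrates its energy
# in the low-frequency coefficients, so the near-zero high-frequency
# coefficients can be quantized away with little visible loss.

def dct(signal):
    """Naive (unnormalized) 1-D DCT-II of a list of samples."""
    n = len(signal)
    return [sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

pixels = [100, 102, 104, 106, 108, 110, 112, 114]  # smooth gradient row
coeffs = dct(pixels)

# The DC coefficient carries most of the energy; high-frequency
# coefficients are tiny and quantize to zero.
quantized = [round(c) if abs(c) > 1 else 0 for c in coeffs]
assert abs(coeffs[0] - sum(pixels)) < 1e-9   # DC term dominates
assert abs(coeffs[7]) < abs(coeffs[1])       # energy compaction
```

In a real codec the sparse quantized coefficients are then run-length and entropy coded, which is where the actual bit-rate reduction is realized.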