Video distribution to end users over the internet is gaining momentum: most peak-hour internet traffic is now used for video content delivery, and internet video streaming services, such as Netflix™, Hulu and YouTube™, are very popular. This video traffic places a heavy burden on the network infrastructure, to the point where the network becomes congested and the Quality of Experience (QoE) delivered to the user is degraded.
One solution for lowering the load on the internet, and thereby reducing network congestion, is the caching of video data: the titles most frequently viewed by users are cached and served on demand. The recommended location for the cache is as close as possible to the end user, in order to minimize the distance over which the content must be transmitted from the source server.
Commercial media caches are usually implemented on standard computing servers by software applications running on general-purpose processors, such as those based on the Intel x86 architecture. The media files are typically stored as-is on standard storage devices and, on request, the entire file is retrieved. This implementation suffers from performance limitations resulting from the computing overhead required to abstract the hardware for the programmer. Such overhead is added, for example, by the operating system layer.
Cached files are stored on one main storage device or on an array of devices. Memory devices can be categorized by their access speed and their cost per bit. Some memory elements, such as embedded on-chip flip-flop circuits, are highly accessible but carry a very high cost per bit. Other memory elements, ordered from high access speed and high cost to low access speed and low cost, are: embedded SRAM, external SDRAM devices (such as DDR), NAND Flash, PCIe NAND Flash, solid state drives (SSDs) and hard disk drives (HDDs). In existing computer servers, main storage is usually implemented with a single HDD or an array of HDDs.
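The memory hierarchy described above can be sketched as follows. This is an illustrative sketch only, not part of the disclosure; the latency and cost labels are rough, assumed orders of magnitude added for illustration.

```python
# Memory tiers from the text, ordered from fastest/most expensive per bit
# to slowest/cheapest per bit. Latency and cost labels are illustrative
# assumptions, not figures from the source.
MEMORY_TIERS = [
    # (tier name, approximate access latency, relative cost per bit)
    ("on-chip flip-flops", "sub-ns", "highest"),
    ("embedded SRAM", "~1 ns", "very high"),
    ("external SDRAM (DDR)", "tens of ns", "high"),
    ("NAND Flash / PCIe NAND Flash", "tens of us", "medium"),
    ("SSD", "~100 us", "low"),
    ("HDD", "ms", "lowest"),
]

def faster_than(a: str, b: str) -> bool:
    """True if tier `a` has lower access latency than tier `b`,
    using the ordering of MEMORY_TIERS."""
    names = [name for name, _, _ in MEMORY_TIERS]
    return names.index(a) < names.index(b)
```

Such an ordering is what motivates placing the most popular cached titles on the faster (and more expensive) tiers while relegating less popular titles to HDDs.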
In the field of video compression, a video frame is compressed using one of several algorithms, each with different trade-offs among degree of compression, accuracy, and processing requirements. These per-frame algorithms are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B. I-frames (Intra coded pictures) are the least compressible but do not require other video frames to decode; P-frames (Predicted pictures) can use data from previous frames for decompression and are more compressible than I-frames; and B-frames (Bi-directionally predicted pictures) can use both previous and subsequent frames for data reference, achieving the highest degree of compression.
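The reference dependencies of the three picture types can be sketched as below. This is an illustrative sketch added for clarity, not part of the disclosure; the GOP pattern string and function names are assumptions.

```python
# Reference dependencies of the three major picture types described above.
FRAME_TYPES = {
    # type: (uses previous frames, uses subsequent frames)
    "I": (False, False),  # Intra coded: self-contained, least compressible
    "P": (True, False),   # Predicted: references previous frames
    "B": (True, True),    # Bi-directional: references both directions
}

def is_independently_decodable(frame_type: str) -> bool:
    """Return True if a frame of this type decodes without other frames."""
    uses_prev, uses_next = FRAME_TYPES[frame_type]
    return not (uses_prev or uses_next)

def random_access_points(gop: str) -> list[int]:
    """Indices in a group-of-pictures pattern (e.g. 'IBBPBBP') at which
    playback can start, i.e. positions of independently decodable frames."""
    return [i for i, t in enumerate(gop) if is_independently_decodable(t)]
```

Because only I-frames are self-contained, a cache that understands picture types can, for example, begin serving a stream only from an I-frame boundary.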
Greene, U.S. Ser. No. 07/770,198B, discloses techniques for detecting repeated video content in order to reduce the amount of high-bandwidth traffic transmitted across a network from a video source device to remote subscriber devices. In particular, Greene's invention relates to a first intermediate device capable of recognizing patterns of video content and sending a communication to a second intermediate device, which then transmits a cached version of the video content. In this way, the first intermediate device does not have to resend the raw, bandwidth-intensive video content over the network. The network may comprise elements of any private or public network.
The aforementioned technologies are part of a trend of responding to ever-increasing demands for the communication of video data across wide-area networks. More recently, real-time internet streaming has become a medium of choice for consumers of high-definition (HD) video, which can increase the volume of video data to be transmitted by a factor of four to ten.
Therefore, there is a long-felt and unmet need for a system that would ease the burden on the network infrastructure by managing and directing cached files.