1. Field
The present disclosure relates to a user system configured to request and receive data, divided into chunks, from a content source over a network via secure and unsecure paths.
2. Description of the Related Art
Caching of content is an important technique in the efficient running of telecommunications networks. Caching, the storing of data within a network along (or near) the path that it takes from the source to the user, reduces the total resources required to deliver content by avoiding multiple transmissions of the same data along the portion of the path between the content source and the user. The resource savings gained by caching increase the closer the cache is to the user and so the most effective caches, in terms of network resource efficiency, are located at the network edge, close to the user.
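The saving described above can be illustrated with a minimal in-memory chunk cache (a sketch only; the class name, the dictionary-backed origin, and the chunk identifier are illustrative and not part of any particular network architecture):

```python
class EdgeCache:
    """A minimal near-edge chunk cache: serve requests locally when
    possible, and fetch from the origin (then store) on a miss."""

    def __init__(self, fetch_from_origin):
        self._store = {}                 # chunk_id -> cached bytes
        self._fetch = fetch_from_origin  # upstream fetch function
        self.origin_fetches = 0          # count of upstream transmissions

    def get(self, chunk_id):
        if chunk_id not in self._store:  # cache miss: one upstream transfer
            self._store[chunk_id] = self._fetch(chunk_id)
            self.origin_fetches += 1
        return self._store[chunk_id]     # cache hit: no upstream traffic

# Two users in the same edge region requesting the same chunk:
origin = {"movie/chunk-0": b"first chunk of the movie"}
cache = EdgeCache(lambda cid: origin[cid])
for _ in range(2):
    cache.get("movie/chunk-0")
# The origin is contacted only once; the second request is served locally.
```

The closer such a cache sits to the users it serves, the larger the portion of the delivery path that the second and subsequent requests avoid.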
The aim of network managers is to use these near-edge caches as much as possible. However, it is also necessary for the content to be protected from unauthorized access and so near-edge caches cannot always be used.
Much of the content delivered over networks is subject to rights management, in which content owners wish to restrict access to certain users rather than allow open access to all. This means that the content owner needs to exert control over the delivery of the content from all sources, including any content that has been cached within the network (or sent via an unsecure path without a cache). As most network caches are not able to apply access policies, protected content is generally not cached and must be retransmitted through the network for each user. This results in inefficient use of the network's resources.
One approach that allows caching of protected content is exemplified by Content Delivery Networks (CDNs). CDNs place surrogates of the content owner's services within a network. These surrogates act on behalf of the content owner to optimize delivery of protected content by caching it. The surrogates provide more services than caching alone: additional services, including authorization and accounting, enable the content owner to trust the CDN to deliver protected content to authorized users. A CDN is usually backed by a formal legal arrangement between the network provider and the content owner.
CDNs are effective at optimizing content delivery and are essential to the efficient running of networks. However, the complexity of the surrogate system, including the communication required for user authorization and the need for a formal legal arrangement, means that the scope for implementation is limited. There are usually just a few instances of CDN surrogates within a network, and they are consequently placed far from the network edge, leaving considerable scope for further optimization of content delivery.
The standard technique to authorize users and allow them access to content, as is revealed in the prior art highlighted below, involves the creation of tokens by the content owner. These tokens are presented by the user to the cache in order to claim the right to access cached content. This means that there has to be a trust relationship between the content owner and the CDN cache. Furthermore, the CDN cache must understand the form and meaning of the owner's tokens, and the owner must trust the cache to process them correctly.
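This token arrangement can be sketched as follows. The sketch assumes, purely for illustration, an HMAC-based token computed over a key shared out-of-band between the content owner and the CDN cache; real token formats are owner-specific, and all names here are hypothetical:

```python
import hashlib
import hmac

# Hypothetical key embodying the trust relationship between the
# content owner (who issues tokens) and the CDN cache (who checks them).
OWNER_CDN_KEY = b"key shared by content owner and CDN cache"

def issue_token(user_id: str, content_id: str) -> str:
    """Content owner creates a token granting user_id access to content_id."""
    msg = f"{user_id}:{content_id}".encode()
    return hmac.new(OWNER_CDN_KEY, msg, hashlib.sha256).hexdigest()

def cache_grants_access(user_id: str, content_id: str, token: str) -> bool:
    """CDN cache verifies the presented token before serving cached content.
    The cache must know both the token format and the shared key."""
    expected = hmac.new(OWNER_CDN_KEY, f"{user_id}:{content_id}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)

token = issue_token("alice", "movie-42")
assert cache_grants_access("alice", "movie-42", token)    # authorized user
assert not cache_grants_access("bob", "movie-42", token)  # token is not his
```

The sketch makes the two dependencies in the text concrete: the cache must hold owner-specific key material, and it must implement owner-specific verification logic, neither of which a generic near-edge cache possesses.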
Content caching in CDNs provides a partial solution to the problem of delivering content efficiently. Although the CDN caches are located closer to the user than the content owner's servers, there is still a large portion of the network between the CDN and the user. This is because network interconnectivity decreases closer to the user, and the network takes on the characteristics of a mathematical (directed) tree, branching into isolated regions with little connectivity between them. There are many such regions near the leaves (the users), and so the cost of placing highly managed CDN caches in all of these regions is prohibitive.
However, the economics of caching are sufficiently favourable that it is worthwhile to place simple, lightly managed caches very close to the network edge. Indeed, some new network architectures propose the placement of simple caches at every network switch. The delivery of protected content would be greatly enhanced if these caches could be exploited. They cannot be exploited using state-of-the-art techniques, since they do not have the software required to manage user authorization. Moreover, they may be controlled and accessed by parties that have no relationship with the content owners, and so will not be trusted by the content owners.
One additional issue with lightly managed caches is the possibility of content being altered, possibly maliciously, during transmission. In the state of the art, content integrity can be assured in two ways. First, the connection itself can be protected so that any data transferred over it cannot be altered. However, this technique requires a distinct connection for each user's access and thus removes the ability to cache content. Alternatively, the content owner can send a token derived from the content (such as a hash), possibly incorporating a secret shared between the content source and the content consumer. The consumer can test the token against the received content in order to determine integrity. However, open caches and multiple users make this technique infeasible, as both the token and the content can be modified en route.
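The failure mode described above can be sketched concretely. When the token is a plain hash derived from the content alone (no per-consumer secret, which open caches and multiple users preclude), an attacker who can rewrite both the content and the token in transit simply recomputes the hash, and the consumer's check passes (names are illustrative):

```python
import hashlib

def plain_token(content: bytes) -> str:
    # Integrity token derived from the content alone, with no shared secret.
    return hashlib.sha256(content).hexdigest()

# The content owner sends the chunk together with its token.
content = b"protected chunk"
token = plain_token(content)

# En route, an attacker rewrites the content AND recomputes the token:
content = b"tampered chunk"
token = plain_token(content)

# The consumer's integrity check still passes despite the alteration:
assert plain_token(content) == token
```

A keyed token (e.g. an HMAC over a secret shared with each consumer) would defeat this recomputation, but distributing a distinct secret to every consumer of openly cached content is exactly what the multi-user, open-cache setting rules out.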