1. Field of the Invention
The present invention relates to delivering and presenting high-quality, media-based content to users or processes over networks, with assured Quality of Service (QOS) even when insufficient network bandwidth is available.
2. Description of the Background Art
The inventors have recognized that even with advances in network technologies, delivering rich, high-quality experiences will remain a challenge. In particular, delivering large media assets—whether audio, video, flash, games, data, or other digital media formats—often requires more network bandwidth/throughput than is available. For instance, in the case of audio and video, a high-bit-rate asset can be delivered in real time only if a user's effective bandwidth is at least equal to the asset's bit rate; otherwise the result is a sub-optimal user experience complete with stutters, stops, and content buffering.
On the other hand, a large game executable may not have the same real-time constraints (or required quality of service) as a video; however, downloading the asset requires a significant amount of time and overhead for the user, even on the fastest networks. While a number of “download managers” on the market will handle this for the user, a content provider may wish to intelligently and adaptively manage the download of assets to the user device (e.g., a computer, a set-top box with memory and/or processor, or another device) in an elegant and transparent manner, without requiring the attention of the user.
Given this, there is a need to manage and deliver large, high-quality media assets to users over their limited bandwidth in a time-shifted manner. That is, there is a need to unobtrusively deliver content to users via available bandwidth and idle cycles, so that when the high-quality content is needed, it is readily available on demand and an uncompromised user experience is rendered. This in turn provides the illusion that the user has more effective bandwidth than is actually available. To this end, there is also a need for this technology to integrate seamlessly into delivery and presentation platforms (including but not limited to web browsers, flash, and other platforms) and content publishing systems. The present invention achieves this and other functionalities and also overcomes the limitations of the prior art.
For ease of understanding, the following definitions will apply throughout this application; however, no definition should be regarded as superseding any art-accepted understanding of the listed terms.
Glossary:
1. Throughput—The amount of data transferred from one place to another in a specified amount of time. Typically, throughput is measured in kbps, Mbps, or Gbps.
2. Quality of Service, QOS—A term specifying a guaranteed throughput level.
3. Client Process—The process on the client that receives cache/display management directives or hints from a server process and then executes those directives to bring the cache's current state in line with the desired state, and may trigger one or more notifications to users or other processes as it does so.
4. Cache—A store of assets with “known” availability or QOS. A cache in this context is an asset storage mechanism whose QOS meets the content requirements and is in general higher than that of the medium used to acquire the assets. State changes within the cache may result in notifications.
5. Server Process—Provides the client with the information required for the client to manage the state of the cache. In its simplest implementation it resembles a quasi-dynamic, server-generated play list. More elaborate implementations (including all implementations of the present invention) also provide control directives for the client to inform other processes of progress against specific sets of assets.
6. Expiration date—The expiration date of an asset; indicates when the asset should be removed from the local cache.
7. Callback URL—a URL that is retrieved once the asset item has been downloaded.
8. Client-side Token—a token or cookie to set when the item has been downloaded. This allows a client or server application to determine the presence of an asset on the local system.
9. Embargo Date—This indicates the latest date the asset will be used.
10. Delete—This indicates that the asset is to be marked for explicit deletion (to override the expiration date). This allows for retraction of an asset.
11. Refresh rate—determines how often a client checks an asset list for changes.
12. Resource path—the network location of any number of resources associated with the asset list.
13. Media Assets—at least one of a text, audio, video, or binary file/data.
14. Item—a single media file.
15. Link—URL for the media file.
16. hitCountUrl—URL to ping after the file has successfully downloaded. A parameter, duration, will be appended to the end of the URL indicating, in seconds, how long the download took.
17. helpUrl—URL for the client process help that is to be displayed when the user selects the help menu item.
18. trackWithCookie—optional element that, if present, indicates this asset will be added to the list of assets in the cookie specified by cookieName.
19. cookieName—name of the cookie that lists all downloaded assets that have the trackWithCookie element present. This cookie is essential for ad serving, so the ad server knows which ads have been downloaded. The cookie value consists of the file names only (no extension or path), separated by commas.
20. cookieDomain—domain on which to set the downloaded-assets cookie. Multiple domains can be specified if separated by semi-colons or commas.
21. /regserver—registers the ActiveX controls with the system and adds the client process to the startup folder.
22. /shutdown—stops another running instance of the client process, if present.
23. /unregserver—unregisters the ActiveX controls and removes the shortcut from the startup folder. Also stops the running instance of clientprocess.exe and removes COM object registry entries.
24. CDN—Content Distribution Network. A federated group of content servers owned and operated by a third party. In general practice a CDN service provides additional capacity using a highly decentralized collection of servers.
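To illustrate how several of the glossary elements above fit together, the following minimal sketch in Python parses a hypothetical asset-list manifest and separates assets to download from assets marked for explicit deletion (term 10, which overrides the expiration date). The schema shown here (assetList, item, link, expirationDate, the delete marker, and the example URLs) is an illustrative assumption only; this application does not fix a concrete manifest format.

```python
import xml.etree.ElementTree as ET

# Hypothetical asset-list manifest. All element names, attributes, and
# URLs are illustrative assumptions, not a schema defined by this application.
MANIFEST = """\
<assetList refreshRate="3600" resourcePath="http://cdn.example.com/assets/">
  <item>
    <link>http://cdn.example.com/assets/promo.mp4</link>
    <expirationDate>2025-12-31</expirationDate>
    <hitCountUrl>http://stats.example.com/hit</hitCountUrl>
    <trackWithCookie/>
  </item>
  <item delete="true">
    <link>http://cdn.example.com/assets/old_ad.mp4</link>
  </item>
</assetList>
"""

def parse_asset_list(xml_text):
    """Return (downloads, deletions): links to fetch vs. links to retract."""
    root = ET.fromstring(xml_text)
    downloads, deletions = [], []
    for item in root.findall("item"):
        link = item.findtext("link")
        # An explicit delete marker retracts the asset, overriding
        # any expiration date (glossary term 10).
        if item.get("delete") == "true":
            deletions.append(link)
        else:
            downloads.append(link)
    return downloads, deletions

downloads, deletions = parse_asset_list(MANIFEST)
```

A client process would repeat this parse at the interval given by the refresh rate (term 11), bringing the cache's current state in line with the desired state expressed by the manifest.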
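Similarly, the downloaded-assets cookie (terms 18 through 20) can be assembled from the links of completed downloads. The sketch below, again using hypothetical URLs, derives the comma-separated cookie value described under cookieName: file names only, with path and extension stripped.

```python
import os
import urllib.parse

def cookie_value(downloaded_links):
    """Build the tracking-cookie value (glossary term 19): file names
    only, no extension or path, separated by commas."""
    names = []
    for link in downloaded_links:
        path = urllib.parse.urlparse(link).path
        base = os.path.basename(path)            # strip the path
        names.append(os.path.splitext(base)[0])  # strip the extension
    return ",".join(names)
```

For example, links ending in ad_42.mp4 and promo.swf would yield the cookie value "ad_42,promo", which an ad server could then read to avoid re-serving already-cached creatives.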