Providers of media information strive to provide reliable service to their clients. To this end, providers will commonly attempt to configure their head-end infrastructures such that the infrastructures will satisfy the demands of the clients. Providers may perform this task by proactively analyzing the theoretical demands that will be placed on the infrastructure. Providers may also perform this task by empirically measuring the load placed on the infrastructure and the ability of the infrastructure to successfully handle the load. In either case, the providers may make periodic changes to the infrastructure to satisfy evolving client demands.
FIG. 1 shows a system 100 comprising a conventional head-end infrastructure 102 that provides media service to a cluster of client modules (104, 106, . . . 108) via a conventional broadcast mechanism 110. Such conventional infrastructure 102 and broadcast mechanism 110 may comprise cable transmission systems, terrestrial antenna transmission systems, satellite transmission systems, and so forth. It is often a relatively straightforward task to design such a conventional head-end infrastructure 102 that satisfies client demand. In part, the manageability of this task ensues from the fact that the conventional system 100 has well defined and stable characteristics that can readily be determined and modeled. For instance, the head-end infrastructure 102 typically provides a single type of output with stable characteristics that can readily be modeled.
Further, the head-end infrastructure 102 typically uses a coupling mechanism that serves the sole purpose of disseminating broadcast information; due to its proprietary nature, the coupling mechanism has relatively reliable and predictable characteristics that can readily be modeled.
Moreover, the design of the head-end infrastructure 102 is, in large part, independent of the functionality used by the client modules (104, 106, . . . 108) and the activities of the client modules (104, 106, . . . 108). For example, consider the classic case where the head-end infrastructure 102 disseminates the broadcast 110 for selective reception by the client modules (104, 106, . . . 108) when the client modules (104, 106, . . . 108) tune to different channels provided by the broadcast 110. The head-end infrastructure 102 must provide functionality which is capable of broadcasting information that can be received by the cluster of client modules (104, 106, . . . 108) spread over a defined geographic area, but otherwise, the design of the head-end infrastructure 102 does not need to directly account for the number of client modules (104, 106, . . . 108) using its services. Further, much of the behavior of the client modules (104, 106, . . . 108) does not directly impact the head-end infrastructure 102. For example, in the traditional case of broadcast via terrestrial antenna, the tuning behavior of the client modules (104, 106, . . . 108) does not require any interaction with the head-end infrastructure 102, so that the design of the head-end infrastructure 102 need not account for this behavior. All of the above simplifying factors allow the provider to readily design a head-end infrastructure which meets prescribed requirements.
The situation becomes more complex with infrastructures that transmit media information over digital coupling mechanisms in streaming fashion. This increased complexity may ensue from a variety of factors. For instance, streaming environments have characteristics that are more varied and complex compared to traditional systems, and are therefore more difficult to model. For example, streaming-type infrastructures may provide multiple types of output profiles having different characteristics. Moreover, streaming-type infrastructures may use public coupling mechanisms (e.g., public TCP/IP packet networks, such as the Internet) that do not enjoy the same level of reliability or predictability as conventional proprietary cable or satellite coupling mechanisms.
Moreover, the client modules in a media streaming environment will require more complex interaction with the head-end infrastructure compared to the case of conventional broadcast transmission. This means that the design of the head-end infrastructure must take into account the functionality used by the client modules and the activities of the client modules. Consider, for example, the case of a video-on-demand (VOD) application, where the head-end infrastructure needs to provide individualized service to requesting client modules over a TCP/IP network. By virtue of such unicast service, the number of the client modules that happen to be simultaneously requesting service obviously impacts the load placed on the head-end infrastructure, and therefore must be considered in the design of the head-end infrastructure. Due to all of these factors, the design of the head-end infrastructure poses considerably more challenges compared to the traditional scenario outlined with respect to FIG. 1.
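The linear relationship described above can be made concrete with a short sketch. This is a hedged illustration only; the function names, bitrates, and client counts are hypothetical and are not taken from the text. It contrasts unicast service, where aggregate head-end load grows with each simultaneously requesting client, against broadcast service, where load depends only on the number of channels carried.

```python
# Hypothetical sketch: unicast VOD load grows linearly with the number of
# simultaneously streaming clients, while broadcast load does not.

def unicast_bandwidth_mbps(concurrent_clients: int, stream_bitrate_mbps: float) -> float:
    """Aggregate head-end bandwidth when each client receives its own stream."""
    return concurrent_clients * stream_bitrate_mbps

def broadcast_bandwidth_mbps(channels: int, stream_bitrate_mbps: float) -> float:
    """Broadcast bandwidth depends only on the channels carried, not on client count."""
    return channels * stream_bitrate_mbps

# 10,000 clients each pulling an assumed 4 Mbps unicast stream:
print(unicast_bandwidth_mbps(10_000, 4.0))   # 40000.0 Mbps
# A broadcast of 100 channels at 4 Mbps serves any number of clients:
print(broadcast_bandwidth_mbps(100, 4.0))    # 400.0 Mbps
```

Doubling the number of concurrent unicast clients doubles the head-end load, which is why the design of a streaming head-end infrastructure cannot ignore client behavior the way a broadcast design can.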
The art has not adequately met the above-identified challenges. Typically, planning strategies resort to various types of simplifying assumptions. Consider the scenario 200 of FIG. 2 which describes a general approach to providing services over a digital packet network. Here, a provider has deployed a collection of working modules (202, . . . 204) and a collection of idle standby modules (206, . . . 208). The provider configures the infrastructure such that the working modules (202, . . . 204) are deployed in an active mode to provide services to a cluster of client modules (not shown). On the other hand, the provider maintains the standby modules (206, . . . 208) in an inactive mode. In this inactive mode, the standby modules (206, . . . 208) remain idle, meaning that they are not available to provide services to the client modules in this mode. As indicated by the arrow in FIG. 2, upon the failure of a working module (e.g., working module 202), the infrastructure replaces this working module with one of the standby modules (e.g., standby module 208), which then assumes the role of the failed module. Regarding scaling considerations, a provider might use various upper bound (worst case) estimates to determine the number of modules to deploy. A particularly aggressive approach requires that enough working modules be provided to handle the case in which all of the client modules are attempting to access the same resource at the same time.
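The standby-replacement scheme of FIG. 2 can be sketched as follows. This is a minimal illustration under assumed names (a `Pool` class and module labels such as "W1" and "S1" are hypothetical, not drawn from the text): working modules serve clients while standby modules sit idle until a working module fails, at which point an idle standby assumes the failed module's role.

```python
# Hypothetical sketch of the working/standby failover scheme of FIG. 2.

class Pool:
    def __init__(self, working, standby):
        self.working = set(working)   # modules actively serving client modules
        self.standby = list(standby)  # idle modules held in reserve

    def on_failure(self, failed):
        """Replace a failed working module with an idle standby, if one remains."""
        self.working.discard(failed)
        if self.standby:
            replacement = self.standby.pop()
            self.working.add(replacement)  # the standby assumes the failed role
            return replacement
        return None  # no spare capacity left

pool = Pool(working=["W1", "W2"], standby=["S1", "S2"])
pool.on_failure("W1")   # "W1" is removed; an idle standby takes its place
```

Note that the standby modules contribute nothing to service capacity until a failure occurs, which is the inefficiency the following paragraph identifies.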
The above strategies have significant shortcomings. Use of idle modules (206, . . . 208) is inefficient because the idle modules are not actively engaged in providing services to the users while in standby mode. Also, there is typically an appreciable lag time required to substitute a standby module for a failed working module. This lag time may equate to a lapse in service experienced by the client modules.
Using worst-case upper bound scenarios to determine the number of working modules to deploy is likewise wasteful. Namely, the probability that certain upper bound scenarios will occur is typically extremely low. This means that a system which provides enough working modules to handle these upper bound scenarios can be expected to greatly under-utilize its processing resources, on average.
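The under-utilization can be illustrated with simple arithmetic. All numbers below are hypothetical assumptions chosen for illustration, not figures from the text: if modules are provisioned for a rare peak in which every client requests service at once, but typical demand is a small fraction of that peak, most deployed modules sit idle almost all the time.

```python
# Hypothetical numbers illustrating worst-case provisioning waste.
worst_case_concurrent = 100_000   # assumed peak: all clients request service at once
typical_concurrent = 5_000        # assumed demand observed nearly all the time
clients_per_module = 1_000        # assumed capacity of one working module

modules_deployed = worst_case_concurrent // clients_per_module        # 100 modules
modules_needed_typically = typical_concurrent // clients_per_module   # 5 modules

avg_utilization = modules_needed_typically / modules_deployed
print(f"average utilization: {avg_utilization:.0%}")  # average utilization: 5%
```

Under these assumptions, 95 of the 100 deployed modules do no useful work in the typical case, which is the sense in which worst-case provisioning "greatly under-utilizes" processing resources.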
There is accordingly an exemplary need in the art for more effective and efficient techniques for configuring and deploying information-dissemination infrastructure.