1. Field of the Invention
The present invention is directed generally to a subscriber management system for a communication network and more particularly to a subscriber management system for a wireless communication network.
2. Description of the Related Art
With the growth and ever-expanding services and capabilities of the Internet and data networking at large, operators of commercial communication networks have increasing needs for effective and efficient methods and systems for managing network subscribers and policies related to their services and applications. These fundamental business management functions include various administrative, accounting, and traffic related management functions that must be performed by a commercial communication network, such as a mobile broadband wireless network. Because mobile broadband wireless networks have inherently less access to bandwidth and increased performance challenges, these management functions are important to providing an acceptable quality of service to customers, and to proper operation of the network.
Most existing network subscriber management strategies were born out of legacy cellular technologies. They tend to focus on a few specific, simple elements of the subscriber management system. They are also based on network architecture models that may not be optimal for some broadband wireless network operators (see FIG. 1). Most existing and emerging strategies are centered on Mobile IP and IP Multimedia Subsystem (“IMS”) architectures, and tend to distribute various subscriber management functions over many different network nodes with similar or overlapping functions. These approaches impose extra network nodes, interfaces, and logical entities into the core routing or switching platforms that may not be optimally suited to performing these tasks. These approaches may be less than optimal for “Greenfield” wireless broadband network operators (i.e., operators of new equipment as opposed to those who operate pre-existing or upgraded equipment) who do not have the same network architecture requirements as operators who need to leverage older technology infrastructure. These issues may result in increased complexity, costs, and scalability limitations for a network operator, while failing to address the ever-increasing needs of emerging, dynamic subscriber management requirements.
An exemplary prior art communications network 10 may be viewed in FIG. 1. The communications network includes an architecture that is common in existing 3G cellular and some emerging WiMAX technologies. The communications network 10 includes a radio access network (“RAN”) 14 in which one or more Access-Service-Networks (“ASN”) 12A and 12B are coupled to a core Switch/Router platform 13. The core Switch/Router platform 13 is also coupled to a packet-switched network 16, such as the Internet, external to the RAN 14. The communications network 10 communicates wirelessly with one or more mobile stations (“MS”) 18 each operated by a user.
Each of the one or more Access-Service-Networks (“ASN”) 12A and 12B includes one or more base-stations (“BS”) coupled to an Access-Service-Network-Gateway (“ASN-GW”) node. For example, in the communications network 10 depicted in FIG. 1, the ASN 12A includes a BS 22A and a BS 22B coupled to the ASN-GW node 20A and the ASN 12B includes a BS 22C and a BS 22D coupled to the ASN-GW node 20B. In ASN 12B, the BS 22C and BS 22D are coupled also to a Base Station Controller (“BSC”) 26B. In existing 3G cellular and some emerging WiMAX technologies, it is common to terminate a user data “session” in the ASN-GW node (e.g., ASN-GW node 20A or ASN-GW node 20B) or a Packet-Data-Serving-Node (“PDSN”) (not shown), which is located between the radio access network (“RAN”) 14 and the packet-switched network 16 (e.g., the Internet).
In many implementations, the communications network 10 includes a packet switched portion 24 (e.g., the components and connections connecting the BS 22A-22D to the external packet-switched network 16) that is tightly coupled with the elements communicating using radio signals (e.g., the BS 22A-22D). As a result, the BSs and ASN-GW are often provided by the same equipment vendor. This paradigm is actually a holdover from legacy circuit-switched cellular architectures in which BSs and base-station-controller(s) (“BSC”) (e.g., the BS 22C, the BS 22D, and the BSC 26B) were almost always provided by the same vendor. However, the industry appears to be moving toward a more “open” model in which the radio access functions are decoupled from core routing functions. To ensure that a BS and an ASN-GW from different vendors will work together, this approach requires standardized interfaces and interoperability testing/certification processes. At this time, most vendors have not fully embraced this approach, and it has yet to emerge in real world deployments.
The subscriber management functions performed by the communications network 10 typically include accounting, hotlining, Quality of Service (“QoS”), and Deep Packet Inspection (“DPI”). In FIG. 1, arrows depict control plane interfaces related to these subscriber management functions. Arrows “A” show Accounting interfaces, arrows “B” show hotlining interfaces, and arrows “C” show QoS policy management interfaces. The bold black lines “D” show the data plane traffic interfaces.
Accounting functionality includes accounting, charging, and reconciliation. Typically, the communication network 10 will include an Authentication/Authorization/Accounting (“AAA”) server 30 configured to interact with accounting functionality incorporated into other network elements that process subscriber traffic. For example, each of the ASN-GWs 20A and 20B includes an Accounting client 34A and 34B, respectively, configured to interact with the AAA server 30.
Methods of accounting range from simple to very complex. An example of a very simple accounting method uses communication session Start and Stop triggers generated by one or more components of the network (such as the ASN-GW 20A). These triggers are communicated to the AAA server 30, which uses them to determine when the user used the communications network 10 and/or the total amount of time the user used the communications network 10 during a communication session.
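By way of a non-limiting illustration, the Start/Stop trigger accounting method described above may be sketched as follows. The `SessionAccounting` class, record fields, and timestamps are hypothetical and form no part of the network 10; the sketch merely shows how an AAA server might pair each Stop trigger with its Start trigger and accrue per-user session time.

```python
import datetime

class SessionAccounting:
    """Minimal sketch of Start/Stop-trigger accounting as an AAA
    server might apply it: pair each Stop trigger with its Start
    trigger and accumulate total session time per user."""

    def __init__(self):
        self._open = {}   # session_id -> (user, start time)
        self.usage = {}   # user -> total seconds used

    def start(self, user, session_id, when):
        # Start trigger received from a gateway (e.g., an ASN-GW)
        self._open[session_id] = (user, when)

    def stop(self, session_id, when):
        # Stop trigger closes the session and accrues its duration
        user, started = self._open.pop(session_id)
        seconds = (when - started).total_seconds()
        self.usage[user] = self.usage.get(user, 0.0) + seconds
        return seconds

aaa = SessionAccounting()
t0 = datetime.datetime(2024, 1, 1, 12, 0, 0)
aaa.start("alice", "sess-1", t0)
aaa.stop("sess-1", t0 + datetime.timedelta(minutes=30))
print(aaa.usage["alice"])  # 1800.0
```

A production accounting client would, of course, carry many more attributes per record (octet counts, termination cause, and the like); the pairing-and-accrual logic shown is the essential idea.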
Emerging technologies allow the network operator to determine application level details about communication traffic flows. This creates a very rich and flexible accounting environment in which the network operator can bill the customer based on the utilization of services by type, such as movies, shopping, chat, email, etc. As a general rule, the more detailed information a network operator has about its customers and their usage of the communication network 10, the more accurately the network operator can manage its bandwidth.
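As a non-limiting illustration of the application-aware billing described above, the following sketch aggregates classified flow records by service type and applies a per-category tariff. The flow records, category names, and rates are invented for illustration only.

```python
# Hypothetical flow records, already classified by application type
flows = [
    {"user": "alice", "category": "video", "bytes": 500_000_000},
    {"user": "alice", "category": "email", "bytes": 2_000_000},
    {"user": "alice", "category": "video", "bytes": 300_000_000},
]

# Example tariff: price per gigabyte by service category
rates_per_gb = {"video": 0.50, "email": 0.10}

# Aggregate usage per (user, category)
usage = {}
for f in flows:
    key = (f["user"], f["category"])
    usage[key] = usage.get(key, 0) + f["bytes"]

# Bill by utilization of services by type
bill = sum((b / 1e9) * rates_per_gb[cat] for (_, cat), b in usage.items())
print(round(bill, 4))  # 0.4002
```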
Hotlining allows the network operator to provide services to a user who is not authorized to access packet data services. A user who was previously authorized to use such services may become unauthorized as a result of a problem or issue, such as nonpayment, inability to pay because of a depleted prepaid account, expiration of a credit card, suspected fraudulent use, and the like. Such a user may wish to seek reinstatement of packet data services. Alternatively, the unauthorized user may wish to subscribe to such services for the first time (i.e., initial provisioning of a subscriber's service). In either case, the network operator may “hotline” the user for resolution of the problem/issue or to subscribe the new user to packet data services. When the user is hotlined, their packet data service is redirected (by a hotlining function 36A in the ASN-GW 20A and a hotlining function 36B in the ASN-GW 20B) to a Hotline Application (“HLA”) (not shown) that notifies the user of the reason(s) that they have been hotlined and offers them a means to address the reason(s) while blocking access to packet data services.
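The redirect performed by a hotlining function (e.g., the hotlining functions 36A and 36B) may be sketched, in a non-limiting fashion, as a simple routing decision. The function name, subscriber identifiers, and HLA address below are hypothetical.

```python
def route_packet_flow(subscriber, destination, hotlined, hla_address):
    """Sketch of a hotlining redirect: an unauthorized subscriber's
    packet data traffic is steered to the Hotline Application (HLA)
    instead of its intended destination."""
    if subscriber in hotlined:
        # Blocked from general packet data service; sent to the HLA,
        # which explains the reason (e.g., depleted prepaid account)
        # and offers a means to resolve it.
        return hla_address
    return destination

hotlined_users = {"bob"}          # e.g., flagged for nonpayment
hla = "hla.operator.example"      # hypothetical HLA address

print(route_packet_flow("bob", "news.example", hotlined_users, hla))
# hla.operator.example
print(route_packet_flow("alice", "news.example", hotlined_users, hla))
# news.example
```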
Quality of Service (“QoS”) refers to service policies applied to subscriber data traffic in the communications network. These policies reference user-specific profiles that tell the network to what type of service-level-agreement the user has subscribed and/or which service(s) the subscriber is authorized to access on the network. These policies are propagated to network elements, which manage service flows and network bandwidth among all subscribers.
QoS elements in prior art communication networks typically include a QoS Policy Manager 40 (which is also frequently referred to as a Policy Function) that manages a centralized QoS Policy Server database (not shown) and related administrative functions pertaining to user-specific QoS policies and rules. The QoS Policy Manager 40 typically interacts with other QoS aware network entities that implement or enforce QoS for subscriber traffic in some portion(s) of the network. For example, the QoS Policy Manager 40 interacts with an Application Manager 28, which manages communications with non-IMS application servers 29, and an IMS/Application service framework 31. As is apparent to those of ordinary skill in the art, non-IMS application servers include servers configured to provide location based services (“LBS”). Such servers are typically internally developed application servers using proprietary interfaces and APIs. In contrast, an IMS server provides an IMS application, such as VoIP, using standard IMS/SIP interfaces and APIs.
On the “northbound” interface, the QoS Policy Manager 40 typically interacts closely with AAA server 30, which usually houses the primary user-specific profile definitions and service authorizations. The QoS Policy Manager 40 translates these profiles into more granular QoS policies that will be applied in the network. On the “southbound” interface, the QoS Policy Manager 40 typically talks to Service-Flow-Authorization (“SFA”) logical entities, such as a QoS SFA function 44A in the ASN-GW 20A of the ASN 12A and a QoS SFA function 44B in the ASN-GW 20B of the ASN 12B. These logical entities typically reside in router/gateway network nodes that process (and terminate) a user's data session. These functions are responsible for applying the QoS policies out to the edge of the network. In a wireless network, this typically means managing the QoS service flows that will be authorized and enabled on the radio links between a BS (e.g., the BS 22A and 22B) and mobile subscribers each operating a MS 18.
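The northbound-to-southbound translation described above may be illustrated, without limitation, as follows: a coarse subscriber profile of the kind housed in the AAA server 30 is expanded by a policy manager into per-flow QoS parameters that an SFA entity (e.g., the QoS SFA function 44A or 44B) could enable toward the edge of the network. The tier names and parameter values are invented for illustration.

```python
# Hypothetical mapping from a coarse AAA profile to granular QoS
# service-flow parameters, as a QoS Policy Manager might derive them.
PROFILE_TO_FLOWS = {
    "gold":   [{"service": "voip", "max_kbps": 128,  "priority": 1},
               {"service": "data", "max_kbps": 6000, "priority": 3}],
    "bronze": [{"service": "data", "max_kbps": 1000, "priority": 5}],
}

def authorize_flows(profile):
    """Return the service flows an SFA should enable for a subscriber
    with the given profile; unknown profiles get no flows."""
    return PROFILE_TO_FLOWS.get(profile, [])

flows = authorize_flows("gold")
print([f["service"] for f in flows])  # ['voip', 'data']
```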
Deep Packet Inspection (“DPI”) is a network packet filtering mechanism that examines the data part of a through-passing packet, searching for predefined criteria to decide whether the packet can pass. DPI devices have the ability to look at Layer 2 through Layer 7 of the OSI model, including headers and data protocol structures. The communications network 10 includes a DPI device 50 configured to identify and classify the traffic based on a rules database (not shown) that includes information extracted from the data part of a packet. DPI is normally in the bearer (data) path, and is “transparent” to other network functions. Thus, conventional subscriber management architectures and standards typically do not specify DPI as a required function. For this reason, DPI is illustrated as having a dashed line border. However, DPI is typically considered an essential function and is present in virtually every commercial service provider network today.
DPI is being used increasingly by network operators for security analysis and bandwidth abuse purposes. Using DPI, network devices can analyze flows, compare them against policy, and then treat the traffic appropriately (i.e., block, allow, rate limit, tag for priority, mirror to another device for more analysis or reporting, and the like). The DPI device 50 also identifies flows, enabling control actions to be based on accumulated flow information rather than packet-by-packet analysis.
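A non-limiting sketch of the DPI behavior described above follows: the packet payload (not merely the headers) is matched against a rule set, and a per-flow action such as block, allow, or rate-limit is selected. The signatures, categories, and actions below are hypothetical; an actual DPI device such as the DPI device 50 examines Layers 2 through 7 and maintains accumulated per-flow state.

```python
# Hypothetical DPI rules: payload signature -> traffic class + action
RULES = [
    {"signature": b"BitTorrent", "category": "p2p", "action": "rate_limit"},
    {"signature": b"GET /",      "category": "web", "action": "allow"},
]

def classify(payload):
    """Return (category, action) for a packet payload by searching
    the data part for predefined criteria; default is to allow."""
    for rule in RULES:
        if rule["signature"] in payload:
            return rule["category"], rule["action"]
    return "unknown", "allow"

print(classify(b"\x13BitTorrent protocol"))   # ('p2p', 'rate_limit')
print(classify(b"GET /index.html HTTP/1.1"))  # ('web', 'allow')
```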
In the conventional architecture model depicted in FIG. 1, three of the four primary subscriber management functions (Accounting, Hotlining, and QoS) described above are implemented at least in part in the vendor-specific ASN-GW. Specifically, accounting is implemented in part in the Accounting Client 34, hotlining in the Hotlining function 36, and QoS in the QoS SFA function 44. This system architecture has many undesirable aspects.
First, this system architecture is inefficient. The large number of arrows “A,” “B,” and “C” (i.e., control interfaces) illustrates the excessive number of control plane interfaces, which impose resource limitations on their respective network elements. Many of these interfaces may have a completely different mapping from the primary bearer or data plane traffic flow, which may create network inefficiencies. In other words, packets traveling across the control plane interfaces may travel across different network nodes than the packets traveling across the data plane traffic interfaces.
Second, the system is complex because each of the Accounting Client 34, Hotlining function 36, and QoS SFA function 44 of the ASN-GW 20 has an interface to another network element (e.g., AAA Server 30, QoS Policy Manager 40, and Core Switch/Router platform 13). Having multiple interfaces from many functions within the ASN-GW to multiple elements in the network creates complexity, and increases the processing requirements and memory requirements in both the ASN-GW and the other network elements.
Third, this system makes using RAN components manufactured by more than one vendor difficult. For example, referring to FIG. 1, a RAN vendor “A” may own the components of the ASN 12A which are illustrated shaded gray (i.e., the BS and ASN-GW shaded gray) and a RAN vendor “B” may own the components of the ASN 12B which are unshaded (i.e., the unshaded BS 22C and 22D and the ASN-GW 20B). In this example, the components shaded gray are assumed to have been manufactured by a different company and use a different protocol than the unshaded components. In this scenario, the Accounting Client 34, Hotlining function 36, and QoS SFA function 44 and their associated interfaces to the other network entities are all duplicated. This is an inefficient use of system resources that significantly increases the number of interfaces and further compounds the complexity, processing, and memory requirements of the network.
Fourth, different RAN equipment vendors may use different RAN architectures. As an example, the RAN vendor “A” may implement a RAN with only the BS 22A, the BS 22B, and the ASN-GW 20A. On the other hand, the RAN vendor “B” may implement the BS 22C, the BS 22D, the BSC 26B, and the ASN-GW 20B. Certain subscriber management functions may be incorporated into the BSC 26B instead of the ASN-GW 20B, necessitating additional interfaces within the network 10, which as described above causes related issues.
Fifth, prior art conventional architectures manage the QoS Policy Manager 40, the DPI device 50, and the Application Manager 28 functions separately, and each resides on a separate component (e.g., computer). Also, each of the QoS Policy Manager 40, the DPI device 50, and the Application Manager 28 includes computer hardware and software platforms that must be scaled with growth, each with separate reliability and redundancy factors. All of this further complicates the communication network 10 and the management thereof.
Therefore, a need exists for a simplified architecture for communication systems. A need also exists for a communication system with fewer control interfaces. A further need exists for a communication system that may be implemented using components produced by more than one vendor without duplicating components (and interfaces) as in prior art communication networks. The present application provides these and other advantages as will be apparent from the following detailed description and accompanying figures.