A mobile device accessing a wireless network typically has a choice of several carriers (i.e., several frequency channels) on a node B in which to place its call. Static load balancing may be defined as a network strategy for choosing the best carrier on the node B to initially host a call. For example, a mobile device may be directed to the carrier with the lightest load. Since the criteria used to choose a carrier for a mobile device may change with time, dynamic load balancing may be employed to move an in-progress call to a better carrier. For example, a carrier initially chosen for a mobile device may become overloaded over time. The mobile device may then be moved to a carrier with a lighter load while its call is in progress.
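The two strategies above can be illustrated with a minimal sketch. The carrier names, the per-carrier load metric (active calls), and the hysteresis threshold below are illustrative assumptions, not part of any particular standard:

```python
def pick_initial_carrier(loads):
    """Static load balancing: direct a new call to the lightest-loaded carrier."""
    return min(loads, key=loads.get)

def maybe_rebalance(current, loads, hysteresis=2):
    """Dynamic load balancing: move an in-progress call only if another
    carrier is lighter by more than a hysteresis margin."""
    best = min(loads, key=loads.get)
    if loads[current] - loads[best] > hysteresis:
        return best      # hand off to the lighter carrier
    return current       # stay on the current carrier

# Hypothetical active-call counts per carrier on a node B.
loads = {"F1": 7, "F2": 3, "F3": 5}
print(pick_initial_carrier(loads))   # F2: lightest load
print(maybe_rebalance("F1", loads))  # F2: difference of 4 exceeds hysteresis
```

The hysteresis margin in the dynamic case is one simple way to avoid repeated "ping-pong" handoffs between similarly loaded carriers.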
Proper distribution of loads amongst carriers is becoming increasingly important and complex. Wireless networks continue to accommodate explosive growth by adding more spectrum (i.e., more carriers), which increases the number of available carrier choices as well as the opportunity for generally optimizing or improving performance. In addition, spectrum allocations within a market continue to be spread amongst widely disparate frequency bands. The carriers available for choice are therefore spread across multiple bands with significantly different radio channel characteristics. These differences must be considered in any strategy that aims to balance load distribution amongst carriers. For example, moving a mobile device to a new (or target) carrier with a lighter load may not be the best solution if the mobile device will experience poor channel conditions on the new carrier.
Strategies for distributing loads among carriers vary within the industry. Many service providers pursue static load balancing only, as the cost of dynamic load balancing is viewed as high when compared with its potential performance benefit. For example, the cost of dynamic load balancing may include the computational burden of selecting which mobile devices to move, as well as possible performance degradations associated with moving or “handing off” calls between carriers. These degradations may include call drops, longer latency, reduced throughput, and increased channel errors.
In some cases, static load balancing may be employed and followed with dynamic load balancing, where dynamic load balancing is typically executed by attempting to find and enforce a generally net-optimal load distribution for current calls. Such a load distribution may be an “end-state” that is determined based on several factors, which may include the load and radio characteristics of each carrier. Other factors, such as service priorities, may also be considered when determining such an end-state. For example, a service provider may favor or prioritize certain carriers for specific types of traffic, e.g., carrier N will handle voice traffic only. Once a desirable end-state is computed, the total load may be placed in (or moved towards) this end-state by executing one or more handoffs between carriers (referred to as “inter-carrier handoffs”). End-state computation and subsequent execution of handoffs may be based on a trigger that indicates some imbalance of load between carriers, e.g., an unequal number of users or unequal cell transmit power. Computation of an “optimal” end-state may be computationally complex and expensive. For instance, each mobile device may need to measure (a computation cost) and report (a signaling overhead cost) radio channel conditions at its location for a number of candidate target carriers in order for the network to compute an optimal solution. These computation and overhead costs add to the other costs (e.g., the risk of call drop in an inter-carrier handoff) of dynamic load balancing.
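One simple form of such an imbalance trigger can be sketched as follows; the threshold values and the choice of metrics (user-count spread and transmit-power spread) are hypothetical:

```python
def load_imbalance_trigger(users_per_carrier, tx_power_per_carrier,
                           max_user_spread=4, max_power_spread_db=3.0):
    """Return True if either the user counts or the cell transmit powers
    differ between carriers by more than a configured threshold, which
    would prompt end-state computation and inter-carrier handoffs."""
    user_spread = max(users_per_carrier) - min(users_per_carrier)
    power_spread = max(tx_power_per_carrier) - min(tx_power_per_carrier)
    return user_spread > max_user_spread or power_spread > max_power_spread_db

# Two carriers: 10 vs. 3 users triggers rebalancing; 5 vs. 6 does not.
print(load_imbalance_trigger([10, 3], [40.0, 38.5]))  # True
print(load_imbalance_trigger([5, 6], [39.0, 38.0]))   # False
```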
Although specific strategies vary, a common “optimal” end-state may be a load distribution among carriers in which the number of users per carrier is equalized. The associated computation costs are minimal and execution of handoffs is straightforward. However, this strategy ignores the potentially wide variation in performance of users spread across carriers. As a simple example, for a total of 6 users spread equally across two carriers, there are C(6,3) = 20 distinct load distribution solutions that place 3 users on each carrier. Conventionally, a strategy that targets only equal users per carrier views each of these solutions as equally valid. However, each solution actually results in different carrier performance. For example, the net performance of a solution with users 1, 2, and 3 on carrier A and users 4, 5, and 6 on carrier B is not the same as that with users 1, 4, and 5 on carrier A and users 2, 3, and 6 on carrier B.
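The count of 20 distributions can be verified, and the performance variation illustrated, with a short sketch. The per-user, per-carrier throughput figures are purely hypothetical, standing in for different radio conditions experienced by each user on each carrier:

```python
from itertools import combinations

users = [1, 2, 3, 4, 5, 6]
# Hypothetical throughput (Mbps) each user would achieve on each carrier.
rate_on_A = {1: 5.0, 2: 4.0, 3: 3.0, 4: 2.0, 5: 1.5, 6: 1.0}
rate_on_B = {1: 1.0, 2: 1.5, 3: 2.0, 4: 3.0, 5: 4.0, 6: 5.0}

# Every way to choose which 3 users go on carrier A; the rest go on B.
splits = list(combinations(users, 3))
print(len(splits))  # 20 = C(6,3)

def net_rate(on_a):
    """Total throughput for one 3-and-3 split."""
    on_b = [u for u in users if u not in on_a]
    return sum(rate_on_A[u] for u in on_a) + sum(rate_on_B[u] for u in on_b)

rates = {s: net_rate(s) for s in splits}
# All 20 splits equalize users per carrier, yet net performance varies widely.
print(min(rates.values()), max(rates.values()))  # 9.0 24.0
```

Under these assumed rates, every split looks identical to a users-per-carrier metric, yet net throughput ranges from 9.0 to 24.0 Mbps depending on which users land on which carrier.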
A more complex but less common method may attempt to ensure that users close to a cell site (which may be referred to as “near” users) are placed on carrier(s) with higher frequencies, and users far from the cell site (which may be referred to as “far” or “edge” users) are placed on carrier(s) with lower frequencies. This method is based on radio physics, which dictates higher signal loss over distance at higher frequencies and lower signal loss over distance at lower frequencies. An end-state in this method thus places far users (i.e., users at the greatest distance from the cell site) on a frequency where the radio signals will suffer the least loss over distance. This method attempts to achieve better performance at the cost of additional complexity resulting from, e.g., the need to identify a boundary between “near” and “far”, and the need to collect and process measurements of radio conditions on the current and target carriers for each user equipment (UE). Moreover, the cost of multiple handoffs is not contained, and performance is only considered indirectly via the (poor) proxy of radio signal strength.
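The frequency dependence of path loss that motivates this method can be illustrated with the standard free-space path loss formula. The band choices (700 MHz low band, 2100 MHz high band) and the near/far boundary below are illustrative assumptions; real deployments would use more detailed propagation models:

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# The high band suffers 20*log10(2100/700) ~ 9.5 dB more loss at any distance.
for d in (0.1, 1.0, 3.0):
    low, high = fspl_db(d, 700.0), fspl_db(d, 2100.0)
    print(f"{d:4.1f} km  low={low:5.1f} dB  high={high:5.1f} dB")

def place_user(distance_km, boundary_km=1.0):
    """Near users -> high-frequency carrier; far users -> low-frequency carrier.
    Choosing the boundary itself is part of the complexity noted in the text."""
    return "2100 MHz" if distance_km < boundary_km else "700 MHz"
```

Because the low band loses roughly 9.5 dB less over any given distance, reserving it for edge users preserves their link budget, which is the rationale behind this placement strategy.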