On the live network that is deployed on the communication network, the service processing resource on each network element is configured according to the maximum service peak. However, the service quantity that actually needs to be processed is not always at the peak and changes over time. As a result, the processing resource of each network element is not fully used and is idle most of the time, which causes a great waste of processing capability and power consumption. To eliminate this waste, communication systems have started to use a method of tracking the service load (L) to dynamically adjust the processing capability rank (Rn) of a service processing resource.
First, a series of processing capability ranks R0, R1, . . . , Rn is set for a processing resource. For each processing capability rank, the processing resource provides a processing capability L0, L1, . . . , Ln that meets the service load of that rank. The service processing capabilities L0 to Ln corresponding to ranks R0 to Rn are discretely distributed in ascending order. The rank spacing, division mode, and processing capability of each rank are determined by the hardware structure and working principle. For example, the processing resources at different ranks are proportional to the numbers or combinations of hardware units, such as processing boards, channels, and chips, in the working state. That is, the higher the configured processing capability, the more hardware resources are required. Therefore, when the processing resource runs at a lower processing capability rank Rn−m, the excess processing capability can be disabled or placed in a standby low-power state to save energy.
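The rank/capability model above can be sketched as a small lookup: given a service load, find the lowest rank whose capability Ln covers it. This is a minimal illustration; the rank count and capability values below are hypothetical, not taken from the source.

```python
# Illustrative rank -> capability table (Ln values are made up for the sketch).
RANK_CAPABILITY = {
    0: 100,   # R0 covers loads up to L0 = 100 service units
    1: 250,   # R1 covers loads up to L1 = 250
    2: 500,   # R2 covers loads up to L2 = 500
    3: 1000,  # R3 covers loads up to L3 = 1000
}

def min_rank_for_load(load: float) -> int:
    """Return the lowest rank whose capability covers the given load."""
    for rank in sorted(RANK_CAPABILITY):
        if RANK_CAPABILITY[rank] >= load:
            return rank
    # Load exceeds every capability: saturate at the highest rank.
    return max(RANK_CAPABILITY)
```

Running at the lowest sufficient rank is what lets the excess hardware (boards, channels, chips) be disabled or held in standby.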
The current method for scheduling a service processing resource sets the processing capability rank of the processing resource based on service quantities, after the ranks have been defined and the service quantities classified. For example, a prediction algorithm obtains the predicted service quantity Lprediction for the next time period based on the service quantities in previous time periods. Then, the processing capability rank Rnext of the processing resource is set according to Lprediction.
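The source does not fix a particular prediction algorithm, so the sketch below uses a simple moving average over recent periods as one common, assumed choice for obtaining Lprediction.

```python
from collections import deque

def predict_next_load(history, window: int = 4) -> float:
    """Predict the next period's load as the mean of the most recent samples.
    A moving average is an assumed stand-in; the source only says 'a prediction
    algorithm ... based on service quantities in previous time periods'."""
    recent = list(history)[-window:]
    return sum(recent) / len(recent)

# Keep a bounded history of per-period service quantities (values illustrative).
history = deque([120, 140, 160, 150, 170], maxlen=16)
l_prediction = predict_next_load(history)
```

Any predictor with the same shape (past loads in, one predicted load out) could be substituted without changing the scheduling step that follows.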
Because services on the communication network inevitably fluctuate, a certain processing resource capability must be reserved to offset the effect of service fluctuation on service quality; this reserve is usually set according to empirical data. The reserved processing resource capability is generally converted into a service quantity ΔLextra. At each interval, when Ln > (Lprediction + ΔLextra) > Ln−1, the processing capability of the processing resource for the next time period is set according to the formula Rnext = Rn.
However, the existing ΔLextra is set to a fixed value according to empirical data. Because usage scenarios and network elements differ, the following problems may occur once the next time period is reached. Assume that the current service quantity is Lcurrent and the actual service quantity fluctuation is ΔL. The first problem is that the reserved ΔLextra is not enough. For example, when the current processing capability rank of a processing resource is R1 and Lcurrent + ΔLextra < L1, the rank remains R1; however, if the actual fluctuation ΔL exceeds ΔLextra, (Lcurrent + ΔL) may exceed the load capability L1 of rank R1. As a result, the processing capability is not enough and the service suffers loss. The second problem is that the reserved ΔLextra is too large. When Lcurrent + ΔL < L1 but Lcurrent + ΔLextra >= L1, the algorithm sets the processing capability rank of the processing resource to R2 rather than the more suitable rank R1. As such, there is a great waste of processing capability.
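Both failure modes of a fixed ΔLextra can be shown numerically. The capability values, loads, and reserves below are hypothetical, chosen only to trigger each case.

```python
caps = [100, 250, 500]  # illustrative capabilities L0, L1, L2

def rank_for(target: float) -> int:
    """First rank whose capability covers the target, saturating at the top."""
    for r, c in enumerate(caps):
        if c >= target:
            return r
    return len(caps) - 1

# Problem 1: reserve too small. The rank stays at R1, but the actual
# fluctuation dL exceeds the reserved dL_extra, so the load overshoots L1.
l_current, dl_extra, dl_actual = 230, 10, 40
chosen = rank_for(l_current + dl_extra)                 # 240 <= L1 -> R1
overloaded = l_current + dl_actual > caps[chosen]       # 270 > 250 -> service loss

# Problem 2: reserve too large. The actual load still fits within L1,
# but the inflated target pushes the choice up to R2.
l_current, dl_extra, dl_actual = 220, 40, 10
chosen2 = rank_for(l_current + dl_extra)                # 260 > L1 -> R2
wasteful = l_current + dl_actual <= caps[chosen2 - 1]   # 230 <= 250 -> R1 sufficed
```

A single fixed ΔLextra cannot satisfy both cases at once, which is the gap the load-tracking adjustment is meant to address.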