The emergence of the next generation network (NGN), a burgeoning network technology, brings more functions and advantages to communication services, but it also significantly increases the workloads of networks. However, since the SIP application servers used in NGN communication perform relatively poorly, they cannot meet the requirements of the NGN.
To improve the performance of the SIP application server, a high-performance SIP stack is essential. Multicore-based “scale-out” technology is one way to improve SIP stack performance: it does so by increasing the number of processing components. Multicore technology implements “scale-out” by integrating multiple complete computer cores within one processor, while multiprocessor technology implements “scale-out” by integrating multiple processors on a server. Additionally, each of the multiple processors in multiprocessor technology may have one or more cores.
To improve server performance by using multicore-based “scale-out” technology, two problems should be solved: how to make full use of the computing resources of each core, and how to minimize the interference and resource contention between cores. Traditional multicore-based “scale-out” technologies mainly comprise a design approach termed “pipeline”, a design approach termed “go through”, and so on.
FIG. 1 illustrates a schematic view of the go through design approach. As illustrated in FIG. 1, a SIP stack 100 is partitioned into several layers, including a transport layer 101, a parsing layer 102, a transaction layer 103, a dialog/session layer 104, and an application layer 105. According to the go through design approach, each core performs the functions of all layers of the SIP stack. In this approach, all cores need to share such resources as the transaction table, session table, dialog table, timer, I/O queue, etc. when processing SIP messages. FIG. 6 illustrates respective schematic views of a transaction table, a session table and a dialog table, which are used for storing the information needed for processing SIP messages, such as the states of transactions, sessions and dialogs and related SIP information. The timer and I/O queue are also resources needed for processing SIP messages. However, when a core is accessing a shared resource such as the transaction table, session table or dialog table (e.g. executing an operation of searching, creating, editing, or deleting), the resource will be locked, so other cores cannot access it and have to wait. Additionally, in the case of a relatively large throughput of SIP messages, cores might have to wait to use the timer, I/O queue, etc. Therefore, there is a resource contention problem among the cores. As the number of cores increases, the resource contention gets more serious, so the overall server performance cannot be effectively improved with the increase in the number of cores.
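The lock-induced waiting described above can be sketched in a short, hypothetical example (the class and method names are illustrative only, not taken from any actual SIP stack implementation): several worker threads stand in for cores in the go through design, and all of their table operations funnel through one lock, so only one can proceed at a time.

```python
import threading

# Hypothetical shared transaction table guarded by a single lock, as in
# the "go through" design where every core accesses the same tables.
class SharedTransactionTable:
    def __init__(self):
        self._lock = threading.Lock()
        self._entries = {}

    def create(self, txn_id, state):
        # While one worker holds the lock, every other worker blocks here.
        with self._lock:
            self._entries[txn_id] = state

    def lookup(self, txn_id):
        with self._lock:
            return self._entries.get(txn_id)

    def size(self):
        with self._lock:
            return len(self._entries)

def worker(table, worker_id, n):
    # Each worker plays the role of one core processing SIP messages.
    for i in range(n):
        table.create(f"txn-{worker_id}-{i}", "TRYING")

table = SharedTransactionTable()
threads = [threading.Thread(target=worker, args=(table, w, 100))
           for w in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(table.size())  # 400 entries, all created serially under one lock
```

With four workers the program still completes, but every create and lookup is serialized by the single lock; as the number of workers (cores) grows, the fraction of time each one spends waiting on that lock grows with it, which is the contention problem the go through design suffers from.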
Similarly, there is also another approach, namely the pipeline design approach. As illustrated in FIG. 2, a SIP stack is likewise partitioned into several layers, including a transport layer 101, a parsing layer 102, a transaction layer 103, a dialog/session layer 104 and an application layer 105. Then, each layer is allocated several cores. This pipeline design approach achieves “scale-out” to an extent, but it can hardly balance the workloads among the layers dynamically. Therefore, it is hard to make full use of the computing resources of each core. More importantly, when a certain layer (e.g. the transaction layer 103) is allocated several cores, each core in this layer also needs to share certain resources such as the transaction table, session table, dialog table, timer, I/O queue, etc. However, when a core is accessing a shared resource such as the transaction table, session table or dialog table (e.g. executing an operation of searching, creating, editing, or deleting), the resource will be locked, so other cores cannot access it and have to wait. Additionally, in the case of a relatively large throughput of SIP messages, cores might have to wait to use the timer, I/O queue, etc. Therefore, similar to the go through design approach, the pipeline design approach is subject to the resource contention problem among cores.
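The per-layer structure of the pipeline approach can be sketched as follows (a minimal, hypothetical illustration with one worker per layer rather than several; the stage names merely mirror the lower layers in FIG. 2): each layer runs as a stage with its own input queue, and a message is handed from stage to stage.

```python
import queue
import threading

# Minimal sketch of a pipeline: each layer is a stage with its own
# input queue; messages flow transport -> parsing -> transaction.
LAYERS = ["transport", "parsing", "transaction"]

def stage(name, inbox, outbox):
    while True:
        msg = inbox.get()
        if msg is None:       # sentinel: shut this stage down
            outbox.put(None)  # propagate shutdown to the next stage
            break
        msg.append(name)      # record which layer processed the message
        outbox.put(msg)

# One queue in front of each stage, plus one for the final output.
queues = [queue.Queue() for _ in range(len(LAYERS) + 1)]
threads = []
for i, name in enumerate(LAYERS):
    t = threading.Thread(target=stage, args=(name, queues[i], queues[i + 1]))
    t.start()
    threads.append(t)

queues[0].put([])   # one SIP "message" entering the pipeline
queues[0].put(None)
for t in threads:
    t.join()
result = queues[-1].get()
print(result)       # ['transport', 'parsing', 'transaction']
```

The fixed queue-per-layer structure is what makes dynamic load balancing hard: if the parsing stage becomes the bottleneck, its queue fills up while other stages idle. And when a single stage is backed by several cores rather than one, those cores still share that layer's tables and must lock them, reproducing the contention problem within each layer.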
Therefore, there is currently no satisfactory solution in the prior art.