Conventionally, when receiving a request from a user (or from a server at a preceding stage), a computer system performs the requested processing and returns the result of the processing to the user (or to the server at the preceding stage) as a response. Hereinafter, the processing performed from the reception of a request until a response to the request is returned is referred to as "synchronous processing". In a computer system that performs only "synchronous processing", system management based on a response time is effective: the system can be managed adequately by monitoring the response time alone.
However, in recent computer systems, processing is not always completed when a response is returned after a request is received. Recent computer systems sometimes perform processing other than "synchronous processing". Processing other than "synchronous processing" (that is, processing other than the request-related processing performed from the reception of a request until a response is returned) is referred to as "asynchronous processing".
For example, "post processing" is one type of "asynchronous processing". "Post processing" is processing that a server performs after returning a response to a user (or to a server at the preceding stage) when processing a request. Post processing is known as a technique for improving the response (that is, shortening the response time) when a great deal of processing must be performed efficiently. If the execution of the post processing is properly scheduled, the load on a server can be balanced.
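As a minimal sketch of this division of labor (the single worker thread, the in-memory queue, and all names here are illustrative assumptions, not part of any cited system), the synchronous part of request handling can return a response immediately while queuing the "post processing" for later execution:

```python
import queue
import threading

post_queue = queue.Queue()  # holds deferred "post processing" tasks

def worker():
    # Background thread: drains deferred tasks after responses are returned.
    while True:
        task = post_queue.get()
        if task is None:  # sentinel to stop the worker
            break
        task()
        post_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

results = []  # stands in for work done after the response (e.g. logging)

def handle_request(data):
    # Synchronous part: compute and return the response immediately.
    response = data.upper()
    # Defer work that is permitted to run after the response ("post processing").
    post_queue.put(lambda: results.append(f"logged:{data}"))
    return response

print(handle_request("hello"))  # prints "HELLO"; logging happens asynchronously
post_queue.join()               # in this demo, wait for post processing to finish
print(results)                  # prints "['logged:hello']"
```

The response time seen by the caller covers only `handle_request`; how promptly the queued tasks are drained is then a scheduling decision, which is why proper scheduling can balance the server load.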
FIG. 1 is a diagram showing the "post processing" performed in a computer system that adopts a server network having a three-layer structure of a Web server, an application server, and a database server. In the computer system with the configuration of FIG. 1, the Web server receives a Web request and, if necessary, transmits a request to the application server to call it. When receiving the request, the application server, if necessary, transmits a request to the database server to call it. The database server performs the necessary processing in response to the received request and returns a response. Here, among the pieces of processing relating to the request received from the application server, the database server performs, after returning the response, the pieces that are permitted to run after the response. That is, the database server performs "post processing". For example, an example of the "post processing" in a database server is described in "Capacity Planning for Client Server Database Systems: A Case Study" (Proceedings of the 1995 International Workshop on Computer Performance Measurement and Analysis, pp. 110-117) by A. Tanaka et al. This paper describes that the internal operation of the Oracle (registered trademark) database server is composed of Shadow and LGWR processes, which return a response to the user, and DBWriter processes, which are executed asynchronously.
When the response is returned from the database server to the application server, the application server performs the necessary processing using the received response and returns a response to the Web server. Here, among the pieces of processing relating to the request received from the Web server, the application server performs, after returning the response, the pieces that are permitted to run after the response. That is, the application server performs "post processing". An example of the post processing performed by an application server is described in "JBoss Introduction" (Gijutsu-Hyoronsha, p. 202) by H. Minamoto. In EJB (Enterprise Java (registered trademark) Beans), which is a realization example of the application server, the "post processing" is performed by using the message-driven bean function, as described in the above literature.
Moreover, when a response is returned from the application server to the Web server, the Web server performs the necessary processing using the received response and returns a response to the user. Here, among the pieces of processing relating to the received Web request, the Web server performs, after returning the response, the pieces that are permitted to run after the response. That is, the Web server performs "post processing".
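The three-layer flow described above can be sketched as follows; the function names and the particular deferred tasks are illustrative assumptions, not the actual behavior of any of the cited servers. Each layer calls the next stage synchronously, but queues its own "post processing" to run only after the user has received the response:

```python
deferred = []  # pieces of "post processing" queued while handling the request
log = []       # records what the deferred tasks actually did

def database_server(req):
    result = f"rows-for:{req}"
    deferred.append(lambda: log.append("db: flush buffers"))      # post processing
    return result

def application_server(req):
    data = database_server(req)                                   # call the next stage
    deferred.append(lambda: log.append("app: notify bean"))       # post processing
    return f"page({data})"

def web_server(web_request):
    body = application_server(web_request)
    deferred.append(lambda: log.append("web: write access log"))  # post processing
    return body  # the user receives this before any post processing runs

print(web_server("q1"))  # prints "page(rows-for:q1)"
for task in deferred:    # post processing at every layer runs afterwards
    task()
print(log)
```

Note that the deferred tasks of all three layers run outside the request path, so none of them contributes to the response time the user observes.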
Moreover, Japanese Patent Application Publication (JP 2003-280931A) discloses a technique for performing processing asynchronously in an executable component, separately from an on-line transaction operation. This patent literature refers to the effects of improving the response time and of distributing the system load through proper scheduling of the execution timing.
Besides the above-mentioned "post processing", "asynchronous processing" includes processing such as virus check program processing and screen saver processing, which is not performed ordinarily but is performed with no relation to request processing. Hereinafter, processing that is not performed ordinarily but is performed with no relation to request processing is referred to as "burst processing". Server resources are consumed when "burst processing" is performed, even if no request is received.
In a computer system in which "asynchronous processing" such as "post processing" or "burst processing" is performed, it is not sufficient to monitor the response time. In a computer system that does not perform "asynchronous processing", there is a strong correlation between the response time and the resource use rate, so the system can be managed sufficiently by monitoring only the response time. However, in a computer system that performs "asynchronous processing", the correlation between the response time and the resource use rate is weak, and management based on the response time alone is insufficient. For example, in a computer system in which "post processing" is performed, when a large number of pieces of "post processing" are left unexecuted, those pieces may suddenly be executed all at once. In such a case, the server resources cannot be managed properly even if the response time is monitored. In addition, because "burst processing" is not performed in response to a request, monitoring the response time is often ineffective against it.
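The blind spot of response-time monitoring can be made concrete with a small sketch (the numbers and names are illustrative assumptions): every request returns quickly, so the monitored response times look healthy, while the backlog of deferred "post processing" grows unobserved.

```python
# Sketch: why monitoring response time alone misses deferred work.
backlog = []          # pieces of "post processing" left unexecuted
response_times = []   # what a response-time monitor would observe

def handle(req):
    response_times.append(0.01)        # the response returns quickly...
    backlog.append(f"deferred:{req}")  # ...while deferred work piles up

for i in range(100):
    handle(i)

# The response-time monitor sees nothing wrong:
assert max(response_times) <= 0.01
# But 100 pieces of post processing wait to run, and may run suddenly:
print(len(backlog))  # prints "100"
```

When those 100 deferred tasks eventually execute, they consume server resources at a moment the response-time monitor cannot predict, which is exactly the weak correlation described above.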
In the operation management of a general server, the operation is performed so as to keep a margin with respect to the use rate and use amount of the server resources. That is, the upper limit of the use rate and use amount of the server resources is suppressed to a value lower than the original capability of the computer system. For example, in operating a computer system, the upper limit of the CPU use rate is not set to 100% but to 60 to 80% in order to cope with unexpected situations. This upper limit is one of the important setting parameters that influence the capability of the computer system. If the upper limit is set to a lower value, unexpected situations can be coped with, but the server resources cannot be utilized effectively. Conversely, if the upper limit is set to a higher value, unexpected situations cannot be coped with. The upper limit is conventionally set based on many years of operational experience and intuition, but such a method lacks objectivity.
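A margin-based upper limit of this kind amounts to a simple admission rule; the following sketch uses an illustrative threshold of 70% (the value and function names are assumptions, chosen from the 60 to 80% range mentioned above):

```python
# Sketch of a margin-based upper limit on the CPU use rate.
UPPER_LIMIT = 0.70  # e.g. 70% rather than 100%, to absorb unexpected load

def accepts_more_work(cpu_use_rate):
    # Admit new work only while usage stays under the configured ceiling;
    # the gap between UPPER_LIMIT and 1.0 is the operational margin.
    return cpu_use_rate < UPPER_LIMIT

print(accepts_more_work(0.55))  # prints "True"
print(accepts_more_work(0.85))  # prints "False"
```

Choosing `UPPER_LIMIT` is precisely the trade-off described above: a lower value widens the safety margin but wastes capacity, while a higher value narrows the margin against unexpected situations.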
In order to manage a server properly in operation, it is desirable to manage it according to the existence or non-existence of "asynchronous processing". As a technique for detecting whether "asynchronous processing" occurs, a technique is known that analyzes the communication relations between processes operating on a computer system and judges whether processing is performed synchronously or asynchronously, as described in Japanese Patent Application Publication (JP 2003-157185A). In this technique, the relations between the processes are grasped by using timing charts. This technique has the problem that a dedicated measuring mechanism called a software probe needs to be embedded in the system in operation, and, considering the risk of bugs, it is difficult to apply the technique to a system in stable operation. Also, a cost for collecting a great deal of event trace data is required, and it is difficult to apply the technique to a system in the operating state.