Information Technology (IT) has become an integral part of modern organizations. The IT infrastructure that supports critical business activities is typically monitored continuously to ensure its health, and many organizations accumulate large volumes of log data generated by various monitoring tools. An IT service provider can offer high-value services that shed light on the health of a customer's IT infrastructure by analyzing this log data. Typically, however, the logs contain only limited, high-level monitoring information. Most existing performance analysis tools take an intrusive approach, instrumenting a running system to gather the detailed performance data their specific analyses require. Performing accurate analysis in a non-intrusive way therefore remains a challenge.
In an IT infrastructure based on Service Oriented Architecture (SOA), the functional capabilities of a computing component are externalized through one or more service interfaces, such as WSDL-specified Web Services. Driven by the demand for business agility and return-on-investment optimization, various dynamic service discovery and composition technologies have been proposed, with the common goal of fulfilling customer service requests in a business-aligned manner. Workflow technologies, for example, can dynamically build a composite service from atomic services using control flows such as sequence, branch, parallel, and loop. In this way, SOA enables composite services to be constructed dynamically from a set of low-level atomic services to satisfy customer requests. Providing quality-of-service (QoS) guarantees in such a dynamic environment is essential to the success of SOA, and doing so requires knowledge of how composite services and atomic services consume IT resources. Understanding these services' demand for system resources, e.g., CPU, is of great help in capacity planning and resource provisioning.
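To make the composition idea concrete, the following sketch models a composite service assembled from atomic services via two of the control flows mentioned above (sequence and parallel), and aggregates each service's CPU demand per request in the spirit of capacity planning. All service names and demand figures are illustrative assumptions, not data from any real system.

```python
# Minimal sketch of SOA-style service composition with CPU-demand
# aggregation. All names and numbers are hypothetical.

from dataclasses import dataclass
from typing import List, Union


@dataclass
class AtomicService:
    """A low-level service with a known average CPU demand (ms per request)."""
    name: str
    cpu_demand_ms: float


@dataclass
class Sequence:
    """Control flow: steps execute one after another; CPU demands add up."""
    steps: List["Node"]


@dataclass
class Parallel:
    """Control flow: branches run concurrently; the branches' total CPU
    work still accumulates, even though wall-clock time may not."""
    branches: List["Node"]


Node = Union[AtomicService, Sequence, Parallel]


def cpu_demand(node: Node) -> float:
    """Recursively aggregate the CPU demand of a (composite) service."""
    if isinstance(node, AtomicService):
        return node.cpu_demand_ms
    if isinstance(node, Sequence):
        return sum(cpu_demand(s) for s in node.steps)
    return sum(cpu_demand(b) for b in node.branches)


# A hypothetical composite "order fulfillment" service.
composite = Sequence([
    AtomicService("validate_order", 5.0),
    Parallel([
        AtomicService("check_inventory", 12.0),
        AtomicService("authorize_payment", 20.0),
    ]),
    AtomicService("confirm_order", 3.0),
])

print(cpu_demand(composite))  # total CPU demand per request: 40.0 ms
```

A real model would also weight branch flows by their execution probabilities and loop flows by their expected iteration counts; the recursive aggregation pattern stays the same.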