Web Services are quickly becoming the common method of communication for loosely coupled distributed applications written in varying languages. Web Services are self-contained, self-describing, modular applications that can be published, located, and invoked across the Internet or across an intranet. Their programmatic interfaces can be described using the Web Services Description Language (WSDL), an XML format for describing network services as a set of endpoints operating on messages containing either document-oriented or procedure-oriented information. Because the Web Services protocols are vendor and implementation independent, they are quickly gaining popularity as a promising approach to communication between loosely coupled distributed applications.
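The document-oriented messages exchanged by such WSDL-described endpoints can be sketched in a few lines of Python; the service namespace and the “getQuote” operation below are illustrative assumptions, not taken from any particular service.

```python
# Minimal sketch of building a SOAP request envelope with the Python
# standard library.  The service namespace URI and the getQuote/symbol
# element names are hypothetical, for illustration only.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"
SVC_NS = "http://example.com/stockquote"  # hypothetical service namespace

def build_request(symbol: str) -> bytes:
    """Return a serialized SOAP envelope invoking a hypothetical getQuote operation."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{SVC_NS}}}getQuote")
    ET.SubElement(op, f"{{{SVC_NS}}}symbol").text = symbol
    return ET.tostring(envelope, encoding="utf-8")

if __name__ == "__main__":
    print(build_request("IBM").decode())
```

In a real deployment the WSDL document would name this operation, its message parts, and the endpoint address; the envelope above is only the on-the-wire payload that results.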
Many examples of Web Services used in tutorials or in technology previews typically involve only one node or service point. However, the number of nodes in Web Services configurations will grow significantly, since development tools make it very easy to compose such systems. Further, workflow systems typically interpret higher-level workflow designs using a workflow engine, producing a significant amount of Web Services traffic and increasing the complexity of the systems. As the number of nodes and the complexity of these applications grow over the coming years, it will become more challenging for developers to understand, debug, and optimize these large applications. Therefore, a major challenge in each of these complex systems will be determining where an error occurred or where the flow of messages stopped.
Existing debugging and problem determination techniques are largely based on “micro-analysis tools,” such as code-level debuggers and simple trace logs, see, e.g., M. Chen et al., “Using Runtime Paths for Macroanalysis,” HotOS IX: Ninth Workshop on Hot Topics in Operating Systems, Lihue, Hi., May 2003. While these tools are very useful for understanding behavior within a single component or thread of control, they provide little useful information beyond those boundaries. When an application is composed of services distributed across a network, a failure in one service may manifest itself, from the point of view of service consumers, at a service other than the one at fault. Failures may also result from network degradation, and the interactions among several services may themselves be complex.
In order to overcome these and other difficulties, problem determination requires macro-level tools: tools that can handle distributed, component- or services-based systems, correlate information from autonomous distributed components, and present a view of the application from a distributed, multi-threaded perspective.
Current tools such as SOAPScope, from Mindreef, Inc. of Hollis, N.H., work mostly at the wire level, “sniffing” the Web Services communication between two points. This communication is usually implemented using the Simple Object Access Protocol (SOAP). This helps with logging the activity at one particular node, but debugging a complex distributed application remains difficult. SOAPTest, from Parasoft of Monrovia, Calif., provides functional testing, load testing, and client testing. SOAPTest uses the classic testing strategy: after a suite of tests has been run, the user is left to determine what to do with the results. The testing phase and the inspection/debugging phase are completely separate.
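The kind of wire-level inspection such a tool performs at a single node can be sketched as follows; the captured envelope and its namespace are hypothetical.

```python
# Sketch of wire-level SOAP inspection at one node: given a captured SOAP
# message, recover the operation being invoked.  The sample envelope and
# the http://example.com/q namespace are hypothetical.
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def invoked_operation(message: str) -> str:
    """Return the local name of the first child of the SOAP Body."""
    body = ET.fromstring(message).find(f"{{{SOAP_NS}}}Body")
    first = body[0]                      # the operation element
    return first.tag.rsplit("}", 1)[-1]  # strip the namespace portion

captured = (
    '<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">'
    '<soap:Body><q:getQuote xmlns:q="http://example.com/q"/></soap:Body>'
    '</soap:Envelope>'
)
print(invoked_operation(captured))  # -> getQuote
```

This illustrates the limitation noted above: the sniffer sees what crossed the wire at this one point, but correlating that observation with activity at other nodes is left to the developer.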
Adaptive software testing has also been conducted, see, e.g., K. Y. Cai, “Optimal Software Testing and Adaptive Software Testing in the Context of Software Cybernetics,” Information and Software Technology, 44 (14), November 2002, 841-855. The adaptive approach is based on the use of control theory, in which the Implementation Under Test (IUT) serves as the controlled object and the software testing system serves as the controller. In particular, Cai's approach specifies the use of controlled Markov chains for managing the adaptation strategy.
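The control-theoretic idea can be illustrated with a toy feedback loop in which the tester (controller) shifts effort toward test actions that have revealed failures in the IUT (controlled object). The failure model and the multiplicative update rule below are illustrative assumptions, not Cai's actual controlled-Markov-chain formulation.

```python
# Toy sketch of adaptive test selection: the testing system (controller)
# reinforces test actions that reveal failures in the implementation
# under test (IUT).  The hidden failure probabilities and the x1.5
# reinforcement rule are illustrative only.
import random

random.seed(0)

ACTIONS = ["parse", "route", "reply"]                       # hypothetical test actions
FAIL_PROB = {"parse": 0.05, "route": 0.30, "reply": 0.05}   # hidden IUT model
weights = {a: 1.0 for a in ACTIONS}                         # controller's state

def choose_action() -> str:
    """Sample a test action in proportion to its current weight."""
    return random.choices(ACTIONS, [weights[a] for a in ACTIONS])[0]

def run_test(action: str) -> bool:
    """Simulate executing a test; True means a failure was revealed."""
    return random.random() < FAIL_PROB[action]

for _ in range(500):
    a = choose_action()
    if run_test(a):
        weights[a] *= 1.5   # reinforce failure-revealing actions

print(max(weights, key=weights.get))
```

After a few hundred iterations the controller has concentrated its effort on the action most likely to expose failures, which is the essence of the adaptive strategy.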
Considerable work has also been done in the area of test generation, including work on the use of fault models and planning, see, e.g., A. Paradkar, “Automated Generation of Self-Checking Function Tests,” International Symposium on Software Reliability Engineering, 2002; finite-state-based techniques, see, e.g., A. Hartman et al., “TCBeans and Software Test Toolkit,” Proc. 12th Intl. Software Quality Week, May 1999; techniques based on combinatorial optimization, see, e.g., C. Williams et al., “Efficient Regression Testing of Multi-Panel Systems,” International Symposium on Software Reliability Engineering, 1999; and model-checking-based techniques, see, e.g., P. E. Ammann et al., “Using Model Checking to Generate Tests from Specifications,” Proc. 2nd IEEE Conference on Formal Engineering Methods, 1998.
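As one concrete instance of the finite-state-based family of techniques, test sequences can be generated that cover every transition of a protocol model; the three-state session machine below is a hypothetical example, not drawn from any cited work.

```python
# Sketch of finite-state-based test generation: derive one input sequence
# per transition of a small protocol model, reaching each transition's
# source state via a shortest path (BFS).  The FSM is hypothetical.
from collections import deque

# (state, input) -> next state; a toy session protocol
FSM = {
    ("idle", "connect"): "open",
    ("open", "send"): "open",
    ("open", "close"): "idle",
}
START = "idle"

def transition_tours() -> list:
    """Return one input sequence per transition, each ending with that transition."""
    tours = []
    for (src, inp) in FSM:
        # BFS from START to src over the FSM's transitions
        frontier = deque([(START, [])])
        seen = {START}
        path = None
        while frontier:
            state, seq = frontier.popleft()
            if state == src:
                path = seq
                break
            for (s, i), nxt in FSM.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, seq + [i]))
        tours.append(path + [inp])
    return tours

for tour in transition_tours():
    print(tour)
```

Each printed sequence, when fed to an implementation of the protocol, exercises one modeled transition; together the sequences achieve full transition coverage of the model.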