An important part of the development of software systems is testing. Because software systems can involve millions of lines of source code in separate modules or routines that must interact, testing is necessary before a system can be shipped, to confirm that the system performs as expected under various configurations and with various inputs. Oftentimes, extensive testing at different development levels, and under a wide variety of testing conditions, gives developers confidence that the system is unlikely to exhibit unexpected behavior when used by consumers.
Different types of software system testing are used at different stages in development. For example, source code is checked for syntactic and logical errors at compile time, before being compiled into executable code. Or, system implementations, in part or in whole, are tested by users who manually manipulate inputs and configurations of the system and compare the results against expected outputs. In yet other examples, this testing is automated, with a separate software module or application running the software through batteries of tests.
Software testing is often performed with reference to a specification of behaviors for the software system being tested. This is done, for example, when the software development process involves development of a behavioral specification before a system implementation is created by writing code. By testing the implementation against the behavioral specification, errors which have been introduced during the coding process can be identified and corrected.
The behavioral specification that underlies testing may include static and/or dynamic aspects. It may give actions as static definitions that are invoked dynamically to produce discrete transitions of the system state. In this case, the specification is often called a model program. Or, the specification may define possible transitions dynamically. In this case, the specification may be called a labeled transition system, a finite-state machine (“FSM”) or a method sequence chart.
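As an illustrative sketch (not drawn from the source), a model program of the kind described above can be written as a class whose state fields form the abstract system state and whose methods are statically defined actions that, when invoked, produce discrete state transitions. The counter model, its bound, and the guard-method naming convention below are all hypothetical.

```python
# Hypothetical minimal model program: actions are static method
# definitions; invoking an enabled action produces a discrete
# transition of the abstract system state.

class CounterModel:
    """Model program for a bounded counter (illustrative only)."""

    def __init__(self):
        self.count = 0  # the abstract system state

    # Each action has an enabling condition (a guard) that determines
    # in which states the action may be invoked.
    def increment_enabled(self):
        return self.count < 3

    def increment(self):
        assert self.increment_enabled()
        self.count += 1  # discrete state transition

    def reset_enabled(self):
        return self.count > 0

    def reset(self):
        assert self.reset_enabled()
        self.count = 0  # discrete state transition


m = CounterModel()
m.increment()
m.increment()
print(m.count)  # 2
m.reset()
print(m.count)  # 0
```

Exploring which actions are enabled in which states, and what transitions they produce, is one way such a specification can later be unfolded into a labeled transition system or FSM.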
One important distinction in software testing is between glass-box and black-box testing. In typical glass-box testing, a test developer or automated testing software module has access to the source code for a particular module, library, or application being tested and can insert code into the implementation in order to affect execution of the implementation or receive information during execution. In this way, the code can be tested at whatever level of specificity the test developer desires. By contrast, in typical black-box testing, a tester or testing software application can only manipulate a particular system implementation through the interfaces the system presents to a user or to other pieces of software. This provides an experience closer to that of a customer, and allows the tester to focus on the ways the implementation will perform once it becomes a product.
Conformance testing is a common method of black-box testing based on an executable behavioral specification and some correctness criteria. This kind of testing checks that an implementation of a software system conforms to its system specification by executing the implementation in a test environment that is aware of the states and transitions envisioned by the specification (which predicts the correct behavior of the system). Conformance testing of this type is often known as “model-based testing.” Model-based testing may utilize FSMs.
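The conformance check described above can be sketched as follows. This is a simplified, hypothetical illustration, not an implementation from the source: the FSM encoding, the toy file-like implementation, and the function names are all invented for the example. The test environment tracks the states and transitions envisioned by the specification and compares the implementation's observable outputs against the outputs the model predicts.

```python
# Hypothetical conformance test: drive a black-box implementation
# through an input sequence and check each observed output against
# the transitions of an FSM specification.

# FSM encoded as {(state, input): (next_state, expected_output)}
FSM = {
    ("idle", "open"): ("open", "ok"),
    ("open", "read"): ("open", "data"),
    ("open", "close"): ("idle", "ok"),
}

class FileLikeImpl:
    """Toy implementation under test (stands in for the black box)."""
    def __init__(self):
        self._open = False
    def step(self, action):
        if action == "open":
            self._open = True
            return "ok"
        if action == "read":
            return "data" if self._open else "error"
        if action == "close":
            self._open = False
            return "ok"

def conforms(impl, fsm, inputs, start="idle"):
    state = start
    for action in inputs:
        if (state, action) not in fsm:
            return False  # input not allowed by the specification
        next_state, expected = fsm[(state, action)]
        if impl.step(action) != expected:
            return False  # implementation deviated from the model
        state = next_state
    return True

print(conforms(FileLikeImpl(), FSM, ["open", "read", "close"]))  # True
```

Here the tester never inspects the implementation's internals; it only exercises the interface and compares outputs, which is what makes this a black-box technique.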
FSMs may be created, for example, from abstract models of software systems, including the above-mentioned model programs, or even diagrams. Methods also exist for constructing FSMs from abstract state models (“ASMs”) of a tested software system. “Generating a Test Suite from an Abstract State Machine,” U.S. Patent Application Publication No. 2003/0159087, published Aug. 21, 2003, describes (among other things) how to generate a finite sequential behavior, encoded in an FSM, from a potentially infinite model given as an ASM.
Although the techniques described in the referenced patent publication work well for many applications, for some software systems, there is a complication. Modern software systems often utilize a design pattern where, from a main thread of control working on a problem, asynchronous threads are spawned which work on sub-problems concurrently with the main thread. At certain execution points, these concurrent threads generate asynchronous callbacks, which notify the main thread of progress or completion of the sub-problem. Testing this kind of behavior is challenging because the callbacks from the asynchronous activity can happen at arbitrary points in time. Additionally, if several callbacks are expected, those callbacks can happen in an undetermined order, because of the non-determinism of thread execution. Modeling this complex type of behavior can be difficult.
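The design pattern described above can be sketched in a few lines. This is a hypothetical illustration (the task count, callback name, and worker logic are invented): a main thread spawns worker threads for sub-problems, and each worker issues an asynchronous callback on completion. Because thread scheduling is non-deterministic, the order in which the callbacks arrive is not determined in advance.

```python
# Sketch of the pattern: a main thread spawns concurrent workers;
# each worker notifies the main thread via an asynchronous callback.

import threading

completed = []            # callback log, shared with the main thread
lock = threading.Lock()

def on_done(task_id):
    """Asynchronous callback reporting completion of a sub-problem."""
    with lock:
        completed.append(task_id)

def worker(task_id):
    # ... work on the sub-problem concurrently with the main thread ...
    on_done(task_id)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# All three callbacks have occurred by this point, but the order of
# entries in `completed` may differ from run to run.
print(sorted(completed))  # [0, 1, 2]
```

A model that enumerates every interleaving of such callbacks grows quickly, which is precisely the difficulty the following sections address.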
What is needed are tools and techniques that facilitate the creation of FSMs for systems with asynchronous callback behavior.