Most computer applications (hereinafter “applications”) are complex systems that, owing to that complexity, require significant testing to ensure that the application will execute as desired.
To facilitate the testing of applications, test cases or test suites (essentially collections of test cases) are designed, implemented and used to test a portion or the whole of an application (often referred to as the subject under test). In many applications, these test cases manipulate the external facade or interface of the subject under test. The results of these test cases are then analyzed and evaluated. As many applications are quite complex, several test cases, sometimes hundreds, are used to test a single application.
For example, a database application may need to be tested to determine whether data can be added to the database (this is the test case). A test script would need to be created to implement the test case. The exemplary test script could include several steps, instructions or processes to test this aspect of the application including: gaining access to update the database; transmitting the update request; receiving confirmation that the update request has been executed; reading from the database to determine whether the data in the update request was stored successfully; and then logging off from the database.
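The exemplary steps above can be sketched as a short, linear test script. This is only an illustration, not an implementation from any particular testing tool; the class and method names (e.g., DatabaseClient, test_add_data) are hypothetical.

```python
# Hypothetical sketch of the database-update test case; all names are
# illustrative and stand in for a real database interface.

class DatabaseClient:
    """Minimal stand-in for a database connection used by the test script."""
    def __init__(self):
        self._store = {}
        self._connected = False

    def connect(self):
        self._connected = True

    def update(self, key, value):
        assert self._connected, "must gain access before updating"
        self._store[key] = value
        return True  # confirmation that the update request was executed

    def read(self, key):
        return self._store.get(key)

    def disconnect(self):
        self._connected = False


def test_add_data():
    """Linear test script: gain access, update, confirm, verify, log off."""
    db = DatabaseClient()
    db.connect()                            # gain access to the database
    confirmed = db.update("id-1", "hello")  # transmit the update request
    stored = db.read("id-1")                # read back to verify storage
    db.disconnect()                         # log off from the database
    return confirmed and stored == "hello"
```

Each statement corresponds to one of the steps enumerated above, executed in order.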
The processes or steps within a test script are executed linearly. Interspersed amongst these steps are one or more verification points which are designed to gather data representative of the operation of the subject under test. A verification point, when inserted into the test script, will issue or output a binary value (usually a single bit of data, e.g., a boolean value) that indicates whether the step or steps with which the verification point is associated completed successfully. The output of the test script execution, which includes the values output by the verification points, is typically stored in a test case execution log.
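A minimal sketch of this verification-point mechanism, under the assumption that each verification point simply appends a named boolean outcome to an execution log (the function and variable names here are hypothetical, not taken from any particular testing tool):

```python
# Illustrative verification-point mechanism: each verification point emits a
# boolean value that is appended to a test case execution log.

execution_log = []

def verification_point(name, condition):
    """Record a single boolean outcome for the associated step(s)."""
    result = bool(condition)
    execution_log.append((name, result))
    return result

# Example usage within a linear test script:
verification_point("update request accepted", True)
verification_point("data stored successfully", "hello" == "hello")

# The log can later be scanned to ascertain which steps failed:
failures = [name for name, passed in execution_log if not passed]
```

Scanning the log for failed entries is what allows a tester to identify which portions of the subject under test need investigation.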
The verification points also enable testers to analyze the test case execution log to ascertain which processes in the test case failed and, thus, which portions of the subject under test need to be investigated to rectify any problems (i.e., solve any bugs in the application, if necessary).
As a result of the data output by the verification points, a test script execution generates an output (the test output) indicating whether the application successfully or unsuccessfully performed the test case.
If a tester desires to vary the linearity of the execution model associated with the test script, then the test script is exposed by the testing tool and the tester manually modifies the test script (typically using a language proprietary to the testing tool being used). Unfortunately, this type of modification is not usually further supported by the testing tool. That is, the testing tool will execute the modifications and provide a mechanism to run and modify the test script, but the testing tool does not provide any advanced tools which operate in conjunction with the modifications.
As a consequence of the ability to make manual modifications, many advanced testers use testing tools to create only a skeleton of a test script and add the “guts” or substance of the test script manually. This type of use leads to considerable costs for maintaining the test scripts.
Known to the inventors are testing tools which help users (e.g., developers, testers, etc.) create, track and manage test scripts. These testing tools provide users with essentially two constructs: the test script and the test suite. The test suite, as noted above, is essentially a grouping of test scripts. The test suite does not actually test the application per se, but simply calls or invokes the individual test scripts which, as a group, form the test suite. This grouping of test scripts enables a test suite to provide a meaningfully larger set of tests which can be applied to the subject under test. While these constructs, or this organization of test scripts, have been for the most part satisfactory, these constructs are now presenting users with significant shortcomings. The inventors have noted that as the testing of ever more complex applications becomes more common, these constructs have difficulty in scaling (i.e., difficulty in the constructs' ability to serve a larger number of users or more complex applications without breaking down or requiring major changes in procedure) and in providing the ability to efficiently create complex testing structures.
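The two constructs described above can be sketched as follows: a test script is a callable returning a pass/fail result, and a test suite merely groups and invokes its member scripts. This is an assumed, simplified model for illustration; the names (TestSuite, script_login, script_update) are hypothetical.

```python
# Sketch of the two constructs: test scripts and a test suite that
# simply invokes them as a group. All names are illustrative.

def script_login():
    return True   # placeholder pass/fail result of a real test script

def script_update():
    return True

class TestSuite:
    """A grouping of test scripts. The suite does not test the application
    itself; it only calls or invokes the member scripts."""
    def __init__(self, scripts):
        self.scripts = list(scripts)

    def run(self):
        # Invoke each member script in turn, collecting pass/fail outcomes.
        return {s.__name__: s() for s in self.scripts}

suite = TestSuite([script_login, script_update])
results = suite.run()
```

Note that in this model the suite adds no logic of its own, which is precisely why interdependencies between linked scripts (discussed below) fall entirely on the user to manage.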
For example, in present systems, to produce complex structures, users link previously created test scripts to create a test suite, or copy portions of one or more existing test scripts to create a new test script (often through simple “cutting and pasting” operations). However, the inventors have noted that this procedure of linking previously independent test scripts (or portions thereof) often results in important interdependencies between instructions in the previously created test scripts (or portions thereof), which impact the outcome of the test, being broken, lost or overlooked by the user. Additionally, the inventors have also noted that interdependencies which impact the outcome of a test are often inadvertently or unknowingly created when test scripts (or portions thereof) are linked together. Handling these interdependencies, to ensure that problems are not created that invalidate or detrimentally impact the outcome of the test, requires significant reliance on the skill, expertise, knowledge and diligence of an individual user.
In a further shortcoming, the present testing environment promotes the creation, by different users, of subtly different copies of a test script (or portion thereof). These slightly different copies, resulting from the manual modifications which are made, detrimentally impact the efficient testing of an application, the analysis of the test results and the identification of defects which require some form of remediation.
Accordingly, addressing, at least in part, some of the shortcomings described above is desired to improve the testing of computer applications.