1. Field of Invention
The present invention relates in general to the field of computing and, more specifically, to systems and methods for enabling more effective user interface testing to be conducted, particularly in the presence of intensive workflows.
2. Description of the Background Art
While nearly all computer programs include a user interface (UI), testing of the UI remains problematic, particularly in the presence of intensive workflows. One reason is that the number of possible input combinations for exhaustive testing of intensive workflows is extremely large. A workflow having n steps, with each step having X potential input combinations, represents a workflow having X^n possible input combinations. A large X will therefore render fully exhaustive testing impracticable. Unfortunately, while conventional testing approaches address various specific issues, they fail to consider intensive workflows, let alone address their impact. Similar problems may also arise with regard to other types of testing.
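The exponential growth described above can be illustrated with a brief sketch (offered for illustration only and not as part of any claimed invention; the function name is chosen arbitrarily):

```python
# Illustration of the X^n input-combination growth for a workflow of
# n steps, where each step admits X potential input combinations.

def total_combinations(inputs_per_step: int, steps: int) -> int:
    """Total input combinations requiring exhaustive testing: X^n."""
    return inputs_per_step ** steps

# Even modest per-step counts make exhaustive testing impracticable.
for x, n in [(10, 5), (50, 10), (100, 20)]:
    print(f"X={x}, n={n}: {total_combinations(x, n):,} combinations")
```

For example, a workflow of only 5 steps with 10 input combinations per step already yields 10^5 = 100,000 combinations, and the count grows exponentially with each additional step.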
One Risk-Based Module Testing approach, for example, calculates a confidence level for each software module. The approach relies on historical data, other metrics, and other application-specific knowledge to produce a risk-based testing strategy incorporating the resulting confidence levels. Such reliance on historical data and previous test experience, however, is applicable only to regression phases. Its suitability quickly diminishes for new products or functional testing phases where such knowledge is unavailable. Assigning a confidence level to every test case is also particularly impractical where the number of test cases or input combinations is extremely large.
Another White Box Priority approach attributed to Intel operates on application components at the application program interface (API), or "white box", level and applies a profile-guided optimization (PGO) concept using profile data from previous test runs. Unfortunately, while perhaps appropriate for particular preliminary unit test runs by a development team, applying component-level or API-level priority becomes increasingly problematic for larger test cases.
A further Compuware approach may also be applicable for smaller test cases in which only a small number of input combinations applies. In this approach, testing activities for distributed applications are prioritized according to assigned risk. However, this approach also does not consider larger test cases, greater numbers of input combinations, or the difficulties they present. Prior test data may also be required for such assignment, as with the above testing approaches.
Yet another tool, the Rational Manual Tester provided by IBM, is a manual test authoring and execution tool. The tool provides for reusing test steps, thereby reducing the impact of software change on testers and business analysts. However, while the testing tool appears to operate as an effective test execution tracker, larger test cases are not even considered, let alone accommodated.
Each of these approaches is also directed at only module operation defects, according to an existing understanding of where such defects have tended to arise. Such approaches may therefore fail to provide comprehensive yet practical operational testing, which may begin to explain the numerous errors encountered by users, among still further problems.
Accordingly, there is a need for a software testing system and methods that enable one or more of the above and/or other problems of existing software testing to be avoided.