1. Technical Field
This disclosure relates to testing graphical user interface (GUI) applications using test scripts, and in particular relates to systems and methods for creating test scripts that are reusable and/or adaptable for testing different GUI applications and/or different versions of GUI applications.
2. Related Art
The relentless pace of advancing technology has given rise to complex computer software applications that help automate almost every aspect of day-to-day existence. Today, applications exist to assist with everything from writing novels to filing income tax returns to analyzing historical trends in baby names. One nearly ubiquitous feature of these applications is that they employ graphical user interfaces (GUIs). GUIs implement graphical windows, pointers, icons, and other features through which users interact with the underlying program. A program implemented with GUIs is referred to as a GUI application (GAP). GAPs require thorough testing prior to release.
In the past, it has been easier to implement the GUI for an application than to thoroughly test the GAP. For GAPs of any significant complexity, the permutations and combinations of GUI elements give rise to an enormous field of potential commands and command sequences that could harbor bugs of any severity, from insignificant to critical failure. Thus, GAPs must be thoroughly tested to ensure that the GUIs interact with the user as intended. Manually testing large-scale enterprise GAPs is tedious, error prone, and laborious. As an alternative to manual testing, test engineers develop test scripts to automate GAP testing.
Test scripts include navigation statements and logic statements. The navigation statements access and manipulate or retrieve properties of GUI objects, while the logic statements determine whether the GAP is functioning as intended. When executed, these test scripts drive the GAPs through different states by mimicking the activity of users interacting with the GAPs by performing actions on the GUI objects. Test scripts process input data, set values of GUI objects using the data, act on the GUI objects to cause the GAP to perform computations, access other GUI objects to retrieve computation results, and compare the outcome with the expected results. Many different test scripts must be written to test the different GUIs and functions of a GAP. As an example, testing a travel reservation GAP will require different test scripts to test the different GUI objects that are displayed as a user navigates through the GAP to book the departure flight, reserve a hotel and/or automobile, book the return flight, and make other travel arrangements. One test script may determine whether the GAP displays correct return date options in response to a user selecting a specific departure flight, while another test script may determine whether the hotel reservation dates are correct in response to the same user selection. To thoroughly test the travel reservation GAP, many more test scripts must be written.
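The division of labor described above can be sketched as follows. This is a minimal, hypothetical illustration: the `FakeGap` class and its `set_value`/`get_value` methods stand in for a real GAP and testing framework and do not correspond to any actual product.

```python
# Sketch of a test script's structure (hypothetical API; the FakeGap
# class stands in for a real travel reservation GAP under test).

class FakeGap:
    """Stand-in GAP: GUI objects are dicts keyed by name, and selecting
    a departure flight recomputes the dependent return date options."""
    def __init__(self):
        self.objects = {
            "departure_flight": {"type": "combobox", "value": None},
            "return_dates": {"type": "listbox", "value": []},
        }

    def set_value(self, name, value):   # navigation: manipulate a GUI object
        self.objects[name]["value"] = value
        if name == "departure_flight":  # GAP recomputes dependent objects
            self.objects["return_dates"]["value"] = ["2024-06-02", "2024-06-03"]

    def get_value(self, name):          # navigation: retrieve a property
        return self.objects[name]["value"]


gap = FakeGap()

# Navigation statements: drive the GAP by acting on GUI objects.
gap.set_value("departure_flight", "FL-101 2024-06-01")
actual = gap.get_value("return_dates")

# Logic statement: compare the outcome with the expected result.
expected = ["2024-06-02", "2024-06-03"]
assert actual == expected, "return date options are incorrect"
```

Note how the navigation statements (the `set_value`/`get_value` calls) are interleaved with the logic statement (the final comparison); this intertwining is the source of the transportability problems discussed below.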
Although determining whether correct dates are displayed is a ubiquitous test applicable to many different types of GAPs, test scripts (e.g., the travel reservation test scripts) are not transportable to test other types of GAPs because the logic statements are intertwined with the GAP-dependent navigation statements in order to access and test the GUI objects within the GAP. Also, test scripts are difficult to update when GAPs are modified (i.e., different versions of the same GAP) because the navigation statements that must be rewritten are scattered among many different test scripts. Test engineers have found that test scripts are not easily transportable even between different versions of the same GAP and in most cases prefer writing new test scripts from scratch over modifying existing test scripts.
There are additional obstacles to generating test scripts that are transportable across different GAPs or different versions of the same GAP. In one method of generating test scripts, capture/replay tools are used to record mouse coordinates and user actions. However, because capture/replay tools use mouse coordinates, changing the GUI layout, even slightly, will usually render the test scripts ineffective. Another method of generating test scripts, referred to as “testing with object maps,” captures the values of properties of GUI objects (rather than just the mouse coordinates). Test engineers assign unique names to collections of the values of the properties of the GUI objects, and then use the names in test script statements to reference the objects. In theory, changes to a GUI layout can be accounted for by modifying the values of the properties of the GUI objects, which are usually stored in an object repository. However, updating GUI tests that are based on object maps is difficult, if not prohibitive, when even small changes to a GUI are made because of the interdependencies explained below.
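Testing with object maps can be sketched as follows. The repository format, logical names, and property sets below are hypothetical illustrations, not any particular tool's format; the sketch shows how a script resolves a logical name to a live GUI object by matching property values, and how a small GUI change breaks that resolution.

```python
# Sketch of an object map (hypothetical repository format): each logical
# name maps to the property values that identify one GUI object.
object_map = {
    "DepartureFlightBox": {"class": "ComboBox", "id": "dep_flight"},
    "ReturnDateList":     {"class": "ListBox",  "id": "ret_dates"},
}

# The GAP's live GUI objects, as a testing tool might enumerate them.
live_objects = [
    {"class": "ComboBox", "id": "dep_flight", "value": "FL-101"},
    {"class": "ListBox",  "id": "ret_dates",  "value": ["2024-06-02"]},
]

def resolve(name):
    """Find the live GUI object whose properties match the mapped values."""
    wanted = object_map[name]
    for obj in live_objects:
        if all(obj.get(k) == v for k, v in wanted.items()):
            return obj
    return None

assert resolve("DepartureFlightBox")["value"] == "FL-101"

# A small GUI change -- the combo box becomes a text box -- silently
# breaks every script statement that resolves through the map:
live_objects[0]["class"] = "TextBox"
assert resolve("DepartureFlightBox") is None
```

In principle the repository entry could be updated to repair the lookup, but every statement that assumed combo-box behavior (item lists, selection calls) would still need to be found and rewritten by hand.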
Navigation And Manipulation Expressions (NAMEs) are the expressions used in test scripts to navigate to GUI objects, set or retrieve the values of the GUI objects, or act on them. NAMEs include application programming interface (API) calls having objects that hold the values of the properties of the GUI objects being tested. Different testing frameworks export different API calls to access and manipulate the GUI objects. Thus, NAMEs are dependent on the GUI object type (e.g., list box, text box, etc.), the location of the object on the screen, and the underlying GUI testing framework. Because NAMEs reference GUI objects by their properties, even the slightest change to a GUI object can invalidate all NAMEs within test scripts that reference the GUI object. For example, changing a GUI object from a combo box to a text box will, almost invariably, invalidate all NAMEs in the original test scripts that reference the GUI object. The interdependence between NAMEs and testing logic renders test scripts hardwired to specific GAPs and testing frameworks. The transportability problem is further exacerbated because GUI object creation depends on the underlying GUI framework, which may differ between GAPs. For these reasons, test scripts based on NAMEs have, to date, not been reusable even between GAPs that have the same functionality, thereby negating a potential benefit of test automation.
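The framework dependence of NAMEs can be illustrated as follows. Both "frameworks" below are hypothetical stand-ins (no real testing product exports these calls); the point is that the same user action requires a different, type-specific NAME under each framework.

```python
# Sketch: the same user action expressed as NAMEs under two hypothetical
# testing frameworks with different exported APIs.

class FrameworkA:
    @staticmethod
    def select_combo(window, name, item):
        # NAME hardwires the object's type (combo box) and location.
        return f"A:select({window}.{name}, {item})"

class FrameworkB:
    @staticmethod
    def combo_pick(path, item):
        # Same action, but a different call signature and addressing scheme.
        return f"B:pick({path}, {item})"

# Choosing a departure flight needs a different NAME under each framework:
name_a = FrameworkA.select_combo("BookingWin", "dep_flight", "FL-101")
name_b = FrameworkB.combo_pick("BookingWin/dep_flight", "FL-101")

assert name_a != name_b  # neither NAME is transportable to the other framework
# If dep_flight later becomes a text box, both calls above become invalid:
# the script would need set_text-style calls in their place.
```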
Additional difficulties in testing GAPs exist because three “type systems” are involved: the type system of the language in which the source code of the GAP is written, the type system of the underlying GUI framework, and the type system of the language in which the test script is written. If the type of a GUI object is modified, the type system of the test script “will not know” that this modification occurred, which complicates the process of maintaining and evolving test scripts. Test scripts do not contain any typing information. They do not use the type system of the GUI framework, which is not part of the scripting language interpreter, and they do not have access to the type system of the programming language in which the GAPs are written. Because of the absence of type systems within test script languages, programmers cannot detect errors statically, obtain adequate documentation, or maintain and evolve test scripts effectively.
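The consequence of this missing typing information can be sketched as follows. The object representation and helper below are hypothetical; the sketch shows that when a GUI object's type changes, no static error is reported and the mismatch surfaces only when the script executes.

```python
# Sketch: the script's "type system" does not know about GUI types.
# This object was a combo box when the helper below was written; it has
# since been changed to a text box, so 'items' no longer holds a list.
gui_object = {"type": "textbox", "items": None, "value": ""}

def select_item(obj, item):
    """Script helper written against the old combo-box shape (hypothetical)."""
    obj["items"].index(item)   # fails at runtime: 'items' is no longer a list
    obj["value"] = item

# No static error is reported; the type mismatch surfaces only on execution.
try:
    select_item(gui_object, "FL-101")
    failed = False
except AttributeError:
    failed = True

assert failed  # the type change is detected only at runtime
```

A statically typed view of the GUI would have flagged `select_item` as invalid the moment the object's type changed, rather than at the first test run that happens to exercise it.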
For all of its limitations, test script based testing, as compared to manual testing, results in an overall reduction in labor for testing GAPs. To help further reduce the labor of testing GAPs, test engineers create models of GAPs and generate the test scripts by using tools that process the modeled GAPs. Model-based testing includes building high level models of GAPs and implementing algorithms that construct test cases. However, this modeling process for generating test scripts is not without significant limitations. For example, building high level models of GAPs is laborious and difficult and there are obstacles to building models directly from the source code of the GAPs. For one, the values of variables of GUI objects are known only at runtime, i.e., in conjunction with the execution of the API calls. Thus, GUI models cannot be derived from source code alone. Also, deriving models from the source code would require (a) knowing the semantics of API calls that create and manipulate GUI objects, (b) developing tools that extract GUI models from GUI resource repositories, and (c) knowing the GUI application language. Currently, there are tens of thousands of combinations of (a), (b), and (c), making it difficult to develop a universal approach to deriving GUI models. What's more, the source code of a GAP is usually not made available to the independent testing organizations that are contracted to test proprietary GUI software. Thus, there are significant challenges to model-based-test-script generation.
There are several obstacles that prohibit GAP testing using other techniques. For one, because GUI objects are created dynamically, i.e., only when the GAP is executed, GAPs cannot be tested statically, such as by examining the GAP source code. Also, because a test script is run on a platform that is external to the GAP platform, GUI objects cannot be accessed as programming objects that exist within an integrated program. And because complete specifications of GUI objects are usually not available, it is difficult to analyze statically how GUI objects are accessed and manipulated by NAMEs.
Therefore, a need exists for a GAP testing structure that implements readily modifiable and reusable test scripts.