The approaches described in this section could be pursued, but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, the approaches described in this section are not prior art to the claims in this application and are not admitted to be prior art by inclusion in this section.
Commercial software is normally subjected to testing before release to customers. In some approaches, the testing is performed by automatic testing systems. In one approach, an automatic testing system captures and replays a script of mouse clicks or keystrokes for the purpose of testing graphical user interface output.
Alternatively, a test engineer can prepare a custom test (“test script” herein) in a testing language, which may be a scripting language or a conventional programming language. The test scripts instruct the automatic testing system how to interact with a program under test, and how to evaluate output generated by the program under test. For example, a test script can simulate mouse clicks or keyboard interaction with a program under test, and the testing system can inform the test script whether the program under test actually has displayed a particular object on the screen. The testing system generates output specifying whether objects are correctly displayed and whether the tests succeed. Examples of commercially available testing systems that can test graphical user interfaces include WinRunner from Mercury Interactive and RobotJ from Rational Software.
However, developing test code or scripts that provide suites of automated tests for complex commercial software may involve extensive engineering time. In particular, the initial development of tests for a software product is resource-intensive, typically involving study of engineering documentation, extensive hand coding of test scripts and assembly of the scripts into test suites. When tests of application programming interfaces (APIs) are needed, extensive study of documentation and hand-written custom test programs may be needed.
After initial test development, significant ongoing effort may be needed as new features are added to a product. All such coding and development is typically manual, and there are no known industry-standard mechanisms to automatically generate test code.
Data driven software has been in use for many years throughout the software industry. In some software development approaches, metadata drives user interfaces and run-time behavior of a software program. Known web-based network management software products use data-driven mechanisms to dynamically drive the visual representation of the user interfaces. For example, a web-based user interface development and runtime system (“Picasso”) used internally at Cisco Systems, Inc. has been used to generate XML-structured data to define web-based navigation interfaces and security characteristics for commercial software products such as CiscoWorks 2000. The Picasso system consists of a SiteMap tool, which generates general navigation and security control metadata; a Content Area tool, which assists in developing application-specific logic; and a User Interface Infrastructure (“UII”) runtime system, which interprets SiteMap and Content Area metadata files at runtime, and generates the visualization of web-based software products. The XML data defines a hierarchy of pages that are displayed as part of an application. Each page is assigned a unique identifier value and security level.
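XML page-hierarchy metadata of the kind described above might, in simplified form, be traversed as in the following Python sketch. The element and attribute names (`page`, `id`, `security`) are illustrative assumptions, not the actual SiteMap schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical page-hierarchy metadata: each page carries a unique
# identifier and a security level, and pages nest to form the tree.
SITEMAP_XML = """
<page id="home" security="0">
  <page id="devices" security="1">
    <page id="device-detail" security="2"/>
  </page>
  <page id="reports" security="1"/>
</page>
"""


def walk(page, depth=0):
    """Yield (depth, id, security) for every page in the hierarchy."""
    yield depth, page.get("id"), int(page.get("security"))
    for child in page.findall("page"):
        yield from walk(child, depth + 1)


root = ET.fromstring(SITEMAP_XML)
for depth, page_id, level in walk(root):
    print("  " * depth + f"{page_id} (security level {level})")
```

Because every page in such metadata has a unique identifier and a defined position in the hierarchy, a traversal like this one is the kind of mechanism that could enumerate the screens of an application without consulting engineering documentation by hand.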
The UII runtime engine has a bookmarking function that can be used to directly display a particular page or screen of an application program, without traversing a hierarchical tree of pages, by providing a screen identifier of the page in a URL associated with the program.
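A bookmarking URL of the kind described might be formed as in this sketch, where the base URL and the query-parameter name `screenId` are hypothetical choices, not the actual UII convention.

```python
from urllib.parse import urlencode

def bookmark_url(base, screen_id):
    """Build a URL that displays one screen of the application
    directly, without traversing the hierarchical tree of pages."""
    return f"{base}?{urlencode({'screenId': screen_id})}"

url = bookmark_url("http://example.com/app", "device-detail")
print(url)  # prints http://example.com/app?screenId=device-detail
```

The runtime engine would resolve the identifier in the query string against its page metadata and render the corresponding screen.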
Based on the foregoing, there is a clear need in this field for an improved way to automatically generate tests for a software program based on metadata relating to the program.
There is also a need for an improved way to automatically generate user interface tests for a software program based on an existing description of the user interface. It would be particularly useful to have an approach in which metadata defining a user interface is re-used for purposes of defining tests.