Many automated systems and techniques exist for stimulus/response testing of various types of devices (so-called units under test (UUTs)). Examples of such units under test include printed circuit boards, line cards, network switches, switch-routers, routers, router-switches, storage network devices, computer networks, and other electronic systems and devices. In general, such testing involves supplying test input stimuli, e.g., commands, data, etc., to the unit under test, observing the actual outputs, e.g., actions, data, etc., generated by the unit under test in response to the stimuli, and possibly comparing the actual outputs to the outputs expected if the unit under test is functioning properly. Depending upon the degree to which the actual outputs match those that are expected, the testing system can indicate that the unit under test has passed the test, has failed the test, or is operating in an undetermined state.
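The compare-and-classify flow described above can be sketched as follows. This is an illustrative sketch only, not taken from any actual test system; the callable `uut` interface and the three result labels are assumptions chosen to mirror the description.

```python
# Sketch of stimulus/response testing: apply stimuli to a unit under
# test, collect the actual outputs, and compare them to the expected
# outputs to classify the result.

def run_test(uut, stimuli, expected_outputs):
    """Return 'pass', 'fail', or 'undetermined' for a unit under test.

    `uut` is assumed to be a callable that accepts one stimulus
    (e.g., a command) and returns the UUT's actual output.
    """
    actual_outputs = [uut(stimulus) for stimulus in stimuli]
    matches = sum(a == e for a, e in zip(actual_outputs, expected_outputs))
    if matches == len(expected_outputs):
        return "pass"
    if matches == 0:
        return "fail"
    # Partial agreement: the degree of match does not clearly indicate
    # either a pass or a fail.
    return "undetermined"
```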
In many such systems, the underlying operation of the unit under test is controlled by device software such as an operating system. For example, many line cards, network switches, switch-routers, routers, router-switches and storage network devices produced by Cisco Systems, Inc., operate using Cisco IOS software. Cisco IOS software is system software that provides common functionality, scalability, and security for a variety of devices, and allows centralized, integrated, and automated installation and management of internetworks while ensuring support for a wide variety of protocols, media, services, and platforms. Testing devices that run such software (so-called "programs under test") often requires sending the devices numerous commands as if the device were being used in normal operation. Such testing is often automated, or at least partially automated, using a test script handler.
In the simplest example, a test script handler executes a test script and provides the stimuli generated by the test script, e.g., one or more commands, to the unit under test. More sophisticated test script handlers can be configured to: control a variety of units under test; execute and/or pass commands from a variety of different test scripts; schedule testing operations; and report results of various test operations. FIG. 1 illustrates a prior art script-based automated test system. Test system 100 includes various computer systems such as server 105 and workstations 110 and 115 which are used to develop and/or host tests for various UUTs 160, 170, and 180. For example, a test engineer might use workstation 110 to develop and host a test script designed to test various features of software executing on UUT 160. Computer systems 105, 110, and 115 are typically coupled to a test server 130 through a network such as a LAN or WAN (as shown), or directly, so that they can host tests or at least provide test scripts to test server 130 and receive test information back from test server 130.
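The simplest handler behavior described above, reading commands from a script and forwarding each one to the unit under test, might look like the following. This is a hypothetical sketch; the `send_command` interface on the UUT connection is an assumption, not part of any actual product.

```python
# Minimal test script handler sketch: forward each script command to
# the UUT in order and collect the responses.

class SimpleScriptHandler:
    def __init__(self, uut_connection):
        # uut_connection: any object with a send_command(cmd) -> str method
        self.uut = uut_connection

    def execute(self, script_lines):
        """Send each non-empty, non-comment line to the UUT in order."""
        results = []
        for line in script_lines:
            cmd = line.strip()
            if not cmd or cmd.startswith("#"):
                continue  # skip blank lines and script comments
            results.append((cmd, self.uut.send_command(cmd)))
        return results
```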
As shown, test server 130 includes a test script handler 135 that acts as a middle agent between test hosts and the units under test. In some cases, test script handler 135 merely acts to pass test commands from computer systems 105, 110, and 115 to the units under test, and returns corresponding information about test results. In other implementations, test script handler 135 can receive test scripts (140) from various test hosts and handle the execution of those test scripts on one or more units under test. Thus, test script handler 135 generally acts to direct traffic between UUTs and test hosts. Test script handler 135 can also coordinate the loading, initialization, and activation of software (145) to be executed on the various UUTs. Such software can be thought of as the programs under test on the UUTs. The commands from the various test scripts are designed to exercise the programs under test in order to fully evaluate the functionality of the UUTs, as is well known in the art. Test script handler 135 can utilize other information such as test reports 150, initialization information for the UUT software (not shown), and the like. Thus, test system 100 gives a variety of potential users the ability to test various devices in a controlled, efficient manner.
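The middle-agent role described above can be sketched as follows. Names such as `load_image` and the per-script report dictionary are illustrative assumptions chosen to mirror the description (load and initialize the program under test, run the host-supplied commands against the UUT, and retain a report for the originating host), not the interface of any actual test server.

```python
# Sketch of a middle-agent test script handler: coordinate loading of
# the program under test, execute script commands against the UUT, and
# record results for reporting back to the test host.

class TestScriptHandler:
    def __init__(self):
        self.reports = {}  # script name -> list of (command, response)

    def run(self, uut, program_image, script_name, commands):
        # Load and initialize the program under test on the UUT.
        uut.load_image(program_image)
        # Execute each command and capture the UUT's response.
        results = [(cmd, uut.send_command(cmd)) for cmd in commands]
        # Retain a report so the originating test host can retrieve it.
        self.reports[script_name] = results
        return results
```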
Since test scripts are typically designed only to provide a series of commands to the unit under test, neither the test script handlers nor the test scripts themselves implement sophisticated debugging tools, such as source-level debuggers. Debuggers are designed to assist software and test engineers in tracking down errors in the operation of software and/or hardware under test. While there are many tools available to debug a program under test or the test script, there are none designed to invoke a debugger when a bug-free test script triggers a failure in a unit under test's program, thereby allowing a developer to perform familiar run-time program debugging.
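One way the missing capability described above might look is sketched below. All names here are hypothetical: a wrapper watches a scripted run and, at the first failure reported by the program under test, invokes a caller-supplied hook (e.g., one that attaches a source-level debugger) instead of simply logging the error.

```python
# Illustrative sketch: run script commands and invoke a debugger hook
# at the first failure reported by the program under test.

def run_with_debug_hook(uut, commands, on_failure):
    """Run commands; call on_failure(cmd, response) at the first failure.

    Returns True if all commands succeed, False if a failure occurred.
    The "ERROR" substring is an assumed failure signature.
    """
    for cmd in commands:
        response = uut.send_command(cmd)
        if "ERROR" in response:
            # Here a real mechanism might attach a debugger to the
            # program under test rather than merely invoking a callback.
            on_failure(cmd, response)
            return False
    return True
```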
In addition, many different types of scripts can target the same unit under test. For example, in a multi-host testing system, some scripts may be written in, for example, Perl, Tcl/Tk, or Expect operating on a Unix platform, while other scripts are written in other script languages on other platforms such as Windows. The program under test itself may be written in different languages such as C, C++, etc., and the unit under test may use any of a variety of processor architectures, e.g., MIPS, ARM, Intel x86, PowerPC, etc. Under such circumstances, a generic source code level program debugging mechanism would be useful for scalability across a large range of development and testing platforms.
Consequently, there is a need for a mechanism to facilitate application program source code debugging when application failures are triggered by test scripts.