Testing is not just about reducing risk; it is also about increasing control over the software being developed. By aligning testing objectives with business objectives and by increasing the effectiveness of testing, both can be delivered. A challenge in the industry with regard to functional testing tools is selecting the right testing tool at the right time in a project, which can significantly increase the efficiency of testing by automating processes, improving communication, and promoting best practices and re-use of tests and test data.
For example, a variety of functional testing tools provide recording or scripting capabilities to perform a specific function and then play back the recording to verify that the tested functionality indeed behaves as desired. A disadvantage of such functional testing tools is that unless the playback session is identical to the recorded/scripted session, the script fails, even when the mismatch arises from unrelated causes. For example, a functional test for an SAP application running in a browser could fail because the server is busy and a response is not obtained in time. A further example is an HTML page that fails to load the first time, where simply reloading the page would typically fix the error. Yet a further example is a random message, such as a low-battery warning on a laptop computer, that pops up while a script is running during playback and causes the script to fail. Such errors are typically not associated with any core functionality of the script or of the software running on the system, but nonetheless cause the functional script, e.g. a functional test script or a functional debug script, to fail.
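The distinction drawn above, between transient environmental errors and genuine functional failures, can be illustrated with a minimal sketch. The class and function names below are purely hypothetical and are not taken from any tool described in this section; the sketch simply shows a playback step being retried when it fails for a reason unrelated to the functionality under test:

```python
import time

class TransientError(Exception):
    """Raised when a step fails for a reason unrelated to the
    functionality under test (e.g. a busy server or a stray pop-up)."""

def run_step_with_retry(step, retries=3, delay=0.01):
    """Run one playback step, retrying on transient errors only.

    A genuine functional failure (any other exception) propagates
    immediately; only TransientError triggers a retry.
    """
    for attempt in range(1, retries + 1):
        try:
            return step()
        except TransientError:
            if attempt == retries:
                raise  # still failing after all retries
            time.sleep(delay)  # e.g. wait for a busy server, then retry

# Usage: a step that fails twice "because the server is busy",
# then succeeds on the third attempt.
calls = {"n": 0}

def load_page():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientError("server busy")
    return "page loaded"

result = run_step_with_retry(load_page)
```

Without such a distinction, the first `TransientError` would abort the run, which is precisely the failure mode attributed to the record/playback tools above.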
U.S. Pat. No. 5,475,843 describes a system and method for improved program testing. The system comprises a Computer-based Training system (CBT) having one or more Application Translation Units (ATUs), a Message Engine, and a Script Engine. For each target application of interest, an ATU is provided for processing events specific to that application, thereby trapping events and translating them into abstract messages or “meta-messages” that convey information about a particular event to the system. The system provides prefabricated building blocks for constructing a high-level model of an application's User Interface (UI). This high-level model serves as a middle ground between test scripts and the application being tested. The knowledge of how a given UI element is controlled, or how it can be observed, is retained in the model rather than in a test script. Consequently, the test script consists only of easy-to-maintain, high-level testing commands. However, a disadvantage is that no action can be taken when the same event appears again in the same test run, because an action for a new event not specified in the script can only be taken after the user has stopped execution and added an action to the script. A further disadvantage is that the system is not configured to record any user actions taken during playback.
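The idea of retaining UI-handling knowledge in a model rather than in the test script can be sketched as follows. This is an illustrative reconstruction, not code from the cited patent; every class, method, and widget name here is an assumption made for the example:

```python
class LoginDialogModel:
    """Middle ground between the test script and the application:
    it knows *how* to operate the dialog's widgets, so the script
    need not."""

    def __init__(self, app):
        self.app = app

    def log_in(self, user, password):
        # Low-level widget handling is encapsulated in the model,
        # not repeated in every test script.
        self.app.set_text("user_field", user)
        self.app.set_text("password_field", password)
        self.app.click("ok_button")

class FakeApp:
    """Stand-in for the application under test; records the
    low-level UI events it receives."""

    def __init__(self):
        self.events = []

    def set_text(self, widget, value):
        self.events.append((widget, value))

    def click(self, widget):
        self.events.append((widget, "clicked"))

# The test script itself reduces to one high-level command:
app = FakeApp()
LoginDialogModel(app).log_in("alice", "secret")
```

If the UI changes, only the model is updated; the high-level command in the script remains valid, which is the maintainability benefit the cited patent claims for this arrangement.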
Without a way to improve methods and systems for executing software processing tools, the promise of this technology may never be fully achieved.