The invention is directed to an improved approach for testing and verifying applications under test that provide user interfaces. Most computing devices, applications, and complex tools rely upon a user interface to interact with, receive input from, and provide information to users. There are many types of user interfaces. Common approaches to implementing user interfaces include the graphical user interface (GUI), the character user interface (CUI), and web-based user interfaces.
Like any other development process for a complex design, it is important to ensure that the process for developing a user interface involves adequate testing and verification of the performance and functionality of the interface components. In the field of computer science, GUI software testing is the process of testing a product that uses a GUI to make sure it meets its written specifications. This is normally done through the use of a variety of test cases, in addition to ad-hoc methods involving human interaction.
There are two conventional approaches that address these synchronization issues. The first is a sleep statement approach, which inserts a delay into the test case by using a sleep statement. The idea behind inserting the delay is to give the application some time to complete execution of the action it is performing. Abstractly speaking, the test waits for some amount of time with the "hope" that some condition will be fulfilled by then. The other is a WaitFor statement approach. A WaitFor statement is also a form of a delay, but it is not simply a time-based delay: it waits for the state of the GUI to change. The tester or verification engineer may need to write some logic to identify the particular state that the test must wait for in order to fulfill the synchronization requirements of the component-operation.
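By way of illustration, the sleep statement approach may be sketched as follows. This is a minimal sketch, not an implementation from the invention; the `app` object, its `click` and `read` methods, and the component names are hypothetical stand-ins for whatever GUI automation interface the test tool provides.

```python
import time

def submit_and_read_result(app, delay=5.0):
    """Sleep-statement approach: insert a fixed delay after an action
    in the hope that the application finishes processing by then.

    `app` is a hypothetical GUI automation handle; `delay` is the
    fixed amount of sleep time inserted into the test case.
    """
    app.click("SubmitButton")   # trigger the application action
    time.sleep(delay)           # fixed delay; the "hope" that it is long enough
    return app.read("ResultLabel")
```

Note that the fixed `delay` embodies the weakness described below: it must be chosen long enough for the slowest expected system, yet it wastes time on faster ones.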
In a typical GUI verification or testing, a tester or verification engineer often adopts the record-replay mechanism which records all the actions in the sequence that the tester performs and generates a script that may be subsequently replayed. Some record-replay tools employ the approach of automatically inserting sleep statements in the generated scripts to record the time-lag between two successive user operations. By using the sleep statements, the test tool is trying to encapsulate the state change of the GUI into a certain amount of time. The event recorder of the tool then inserts sleep statements into the script. These sleep statements make the script sleep for a specified amount of time.
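The recorder behavior described above can be sketched as follows. This is an assumed, simplified model of such an event recorder, not the actual mechanism of any particular test tool: it measures the time-lag between successive recorded actions and emits a corresponding sleep statement into the generated script.

```python
import time

class RecordReplayRecorder:
    """Minimal sketch of a record-replay event recorder that inserts
    sleep statements reflecting the time-lag between user actions."""

    def __init__(self):
        self._last_event_time = None  # monotonic time of the previous action
        self.script_lines = []        # generated replay script, one line per entry

    def record_action(self, action):
        """Record one user action (given as a script line, e.g. a click)."""
        now = time.monotonic()
        if self._last_event_time is not None:
            # Encapsulate the observed gap between actions as a sleep statement.
            lag = now - self._last_event_time
            self.script_lines.append(f"time.sleep({lag:.2f})")
        self.script_lines.append(action)
        self._last_event_time = now
```

Replaying `script_lines` in order would then reproduce the recorded actions at approximately the speed at which the tester performed them.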
The test tool then tries to ensure that the script is replayed at the same speed as it was recorded by the tester. So if the tester, while recording, waited for, e.g., a window to pop up, the script will do the same. Nonetheless, this approach often tends to over-estimate the amount of sleep time in order to eliminate failures due to an insufficient amount of waiting time. In other words, these tests tend to wait for longer periods of time than necessary. Moreover, this approach is often unstable because the specified amount of sleep time may be insufficient when slower testing systems are used for these tests. In addition, this approach may be unreliable by passing certain operations that should have failed or by failing certain operations that should have passed, because there is no mechanism to check whether the previous operation has indeed completed.
The WaitFor approach instructs the test case to wait for a certain GUI component to change state or for some condition to be fulfilled, rather than waiting for a certain period of time. In this approach, the tester is required to identify the component-operation that signifies the completion of the task and manually insert these WaitFor statements in the test case at the required locations. The test will then wait for whatever condition is provided with the statement to be fulfilled before moving on to the next step in the test. These WaitFor statements could either wait for a specific GUI component to exist and be accessible or wait until a specific condition becomes true.
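A generic WaitFor statement of this kind may be sketched as a polling helper. This is an illustrative sketch only; the function name, parameters, and the predicate passed to it are assumptions, not part of any specific test tool.

```python
import time

def wait_for(condition, timeout=30.0, interval=0.25):
    """WaitFor approach: block until `condition()` returns a truthy
    value, polling every `interval` seconds, or raise TimeoutError
    once `timeout` seconds have elapsed.

    `condition` is the tester-supplied logic identifying the GUI
    state (or other predicate) that the test must wait for.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result           # condition fulfilled; proceed with the test
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not fulfilled within timeout")
        time.sleep(interval)        # poll again after a short interval
```

A test case might then call, for example, `wait_for(lambda: app.exists("SaveDialog"))` before interacting with that dialog, where `app.exists` is again a hypothetical component-accessibility check.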
Nonetheless, such a WaitFor statement must be inserted manually, as it requires identification of the state to wait for. In other words, the same WaitFor statement will need to be manually inserted in all places that require the test case to wait for the change of state or fulfillment of the condition. This approach is thus impractical and prone to error because there may be a large number of scenarios to be tested. This approach also requires the tester or the verification engineer to possess the knowledge of how and where to make the necessary changes. This approach also consumes a lot of computing resources because the same logic in the WaitFor statements will be duplicated many times in a test case. This approach also causes maintainability issues when there are changes in the logic for the WaitFor statements, because such changes need to be replicated in all required places in a test case. In addition, this approach completely fails in cases of monkey testing or random scenario generation, where the test tool is required to perform actions randomly, because the randomly generated tests will not include such WaitFor statements.