There currently are many commercially available tools and test frameworks for testing web applications within a software development process. One of the most challenging tasks during test automation using these frameworks is the synchronization between a test sequence and the system under test (SUT). It typically is desirable to ensure that automated tests run well under various conditions that impact the runtime and overall performance of the system under test. Parameters affecting these conditions include, for example, hardware resources of the SUT, hardware resources of the client (e.g., the browser system), network traffic, parallel usage of the hosts involved, type and version of the browser used for automation (e.g., Internet Explorer, Firefox, etc.), and/or the like.
Because a test automation framework should retrieve reliable results under conditions affected by these and/or other parameters, it is desirable to ensure that the test execution can handle different scenarios regarding available resources of the SUT. For instance, a test framework may in some test scenarios need to ensure that an expected result is checked neither before a server has delivered a response, nor before a client has finished processing and rendering the associated content.
When testing simple web applications without AJAX (Asynchronous JavaScript and XML), for example, the automation software can wait for a specific event of the browser. This event is raised by the browser when the response has been delivered by the server and the browser has finished rendering. At this time, the test can resume checking results for expected values. An event in this instance refers generally to a software-originated or other message that indicates that something has happened such as, for example, a keystroke, a mouse click, a webpage that has finished loading, etc.
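The event-based synchronization described above can be sketched in simplified form. The helper names below are hypothetical stand-ins for whatever automation framework is in use, with the browser's "load" notification modeled as a `threading.Event`:

```python
import threading

# Hypothetical sketch: the browser's "page loaded" event modeled as a
# threading.Event. The automation blocks until the browser signals that
# the response has arrived and rendering has finished.
page_loaded = threading.Event()

def on_browser_load_event():
    # Called by the (simulated) browser once rendering has finished.
    page_loaded.set()

def run_test_step(timeout=30.0):
    # Resume checking expected values only after the event fires.
    if not page_loaded.wait(timeout):
        raise TimeoutError("browser never raised the load event")
    return "checking expected values"

# Simulated flow: the browser finishes loading, then the test resumes.
on_browser_load_event()
result = run_test_step()
```

In a real framework, the event would be raised by the browser itself; the sketch only illustrates why synchronization is straightforward when such an event exists.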
Yet if the web application uses AJAX, this mechanism is not applicable. It therefore can be very difficult to determine the correct and reliable point in time for resuming test execution.
One advantage of AJAX applications over simple web applications relates to the ability to reload and render just a portion of a page for user interactions, as opposed to needing to reload and render an entire page. The communication between browser and server is done asynchronously, and the browser will not raise a "ready" event to signal that rendering has finished. One benefit of this approach is the potentially massive decrease in network traffic when using AJAX for web applications.
The use of AJAX can become an issue, however, as it currently requires waiting a static amount of time for user interactions to complete. Because there is no event to listen to, the time to wait has to be long enough to ensure actions have been finished and that the SUT is ready to resume. Thus, running functional tests in a huge and complex test suite, for example, can involve the test automation waiting at many (sometimes thousands, tens of thousands, or even more) points of execution. These waiting times add up, thereby increasing the time of execution of large test suites. But in software development processes using continuous integration and nightly builds, for example, the test execution time easily can draw near 24 hours. This can lead to conflicts running the test automation for every nightly build.
FIG. 1 is a sequence illustrating this timing issue. As can be seen in FIG. 1, the critical time slot #1 spans a timeframe between user interaction (passed from the framework to the browser) and the appearance of the progress indicator (based on information transmitted from the browser to the framework). It will be appreciated that if there is no progress indicator displayed, the critical time slots #1 and #2 will be merged into one longer critical time slot between the user interaction and the appearance of the new rendered object.
In this example, the critical time slot #1 can take from a few milliseconds up to several seconds. The amount of time may depend on parameters such as, for example, the browser version, the client operating system, the hardware resources, the load of the machine, etc. Furthermore, as will be appreciated, there can be a high fluctuation of performance during test execution of a whole test suite.
Critical time slot #2 spans a timeframe between disappearance of the loading image and the displaying of the new rendered AJAX object. This time slot also can differ in its dimension, e.g., based on the above-described and/or other parameters.
During these time slots, the automation framework cannot determine whether the processing of the user interaction has already been finished, or has not yet started. That said, critical time slot #2 can be handled by waiting for a specific GUI object or state change of an object. This feature is supported by most frameworks, e.g., through observation of the expected object being rendered by the browser.
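The "wait for a specific GUI object or state change" strategy for critical time slot #2 can be sketched as a generic polling helper. The `wait_for` name and the simulated condition below are illustrative assumptions, not any particular framework's API:

```python
import time

def wait_for(condition, timeout=10.0, poll_interval=0.25):
    """Poll until condition() is truthy, covering critical time slot #2.

    In a real framework, 'condition' would query the browser for the
    expected rendered object or state change (hypothetical helper)."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("expected object was not rendered in time")

# Simulated check: the expected "object" appears after a short delay.
appear_at = time.monotonic() + 0.5
found = wait_for(lambda: time.monotonic() >= appear_at, timeout=5.0)
```

Note that this only works once it is known which object to wait for; as explained next, critical time slot #1 offers no such target.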
However, critical time slot #1 cannot be handled in this way, as it is hard to determine whether the observed object has already been refreshed, or whether it is still being displayed prior to being reloaded. Thus, the automation framework here typically has to use a fixed wait time to help make sure that this time slot is fully covered. For instance, a fixed wait time can be introduced after each user interaction (e.g., mouse click, keystroke, etc.).
It would be desirable to make this fixed wait time long enough to cover situations where there is a high load on the test server. Generally, this amounts to at least 5 to 10 seconds. These wait times in the aggregate have the potential to massively increase test execution time of the whole test suite.
Shortening the wait time would accelerate tests and decrease the time to finish. While this might be faster, it could lead to instability because the synchronization between the browser and the test framework is not reliable. Conversely, choosing longer waiting times reduces instability but extends the time to finish all tests.
Consider, for example, a test suite providing 500 test cases, with about 10 requests to the server per test case (inclusive of pre- and post-conditions) and 7 seconds of wait time after every user interaction for synchronization. This adds up to nearly 10 hours just for the loading and wait time for 5,000 requests. For the majority of these requests, there is no need for such a long waiting time, but unfortunately it is unpredictable when a longer wait time is required.
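The arithmetic behind this example can be made explicit; the figures below are taken directly from the scenario above:

```python
# Figures from the example scenario above.
test_cases = 500
requests_per_case = 10   # inclusive of pre- and post-conditions
wait_seconds = 7         # fixed wait after every user interaction

total_requests = test_cases * requests_per_case       # 5,000 requests
total_wait_seconds = total_requests * wait_seconds    # 35,000 seconds
total_wait_hours = total_wait_seconds / 3600          # roughly 9.7 hours
```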
As an alternative to adopting fixed wait times, the SUT could be altered, e.g., to send out status information to the test framework. Similarly, it is possible to change the document object model (DOM) at runtime and/or add JavaScript or the like that is executed, e.g., to signal to the test framework to continue with test execution. Still other solutions may deal with the same problem, but do so by making code changes with respect to the SUT (directly or indirectly) or indirectly requesting monitoring in parallel to the browser-to-server communication.
Unfortunately, however, these approaches require changes to the code of the SUT (directly or indirectly) during runtime and/or development time. As a result, the software under test will not be the same as the software delivered to the customer. These tests therefore do not prove that the same software will run properly in a production environment, for a customer, etc. The further modification of code to include adapters also typically involves a proprietary interface with the request monitoring and test tool.
A further problem during test automation relates to the maintenance of test data. For instance, in scenarios where the web server needs to fetch data from third-party services to process requests from browsers, the maintenance of test data can become a very time-consuming issue, e.g., if the data is changed very often, depends on a request time, etc. There are many example services that fall into this category including, for example, services that retrieve current interest rates, hardware sensor data, and/or the like. To handle these situations, the test data (reference values to be compared with displayed information in the browser) may be adapted frequently, or the service may be mocked to deliver identical values every time. But these techniques result in the application being tested in a manner that does not correspond to a real-world scenario with varying data.
In this regard, when it comes to mocking services to obtain fixed data, the service to be called during the test run will be simulated. A service with the same signature as the real service is implemented, and the mock-implementation of the service delivers static data to test against. The delivered data can also be mocked depending upon incoming requests.
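A minimal sketch of such a mock follows, assuming a hypothetical temperature service (the class and method names are illustrative only): the mock exposes the same signature as the real service but delivers static data.

```python
class RealTemperatureService:
    """The real service; in production this would query live hardware,
    so its values vary from call to call."""

    def current_temperature(self, sensor_id: str) -> float:
        raise NotImplementedError("requires live sensor hardware")

class MockTemperatureService:
    """Mock with the same signature as the real service, delivering
    static test data (here keyed by the incoming request)."""

    _fixed_readings = {"sensor-1": 78.9, "sensor-2": -13.7}

    def current_temperature(self, sensor_id: str) -> float:
        return self._fixed_readings[sensor_id]

# During the test run, the mock is called instead of the real service,
# so every call returns the same predictable value.
service = MockTemperatureService()
reading = service.current_temperature("sensor-1")
```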
Yet if this mocking approach is used, every service that is used to obtain data for displaying in a browser may have to be mocked. Many implementations may have to be written, delivering all different constellations of data with extreme values for the upper and lower bounds, special cases, and so on. But even if all of this work is undertaken, this still may not cover every situation that can be delivered by a real service bound to the system (especially in the case of, for example, services delivering sensor data, real-time rates, etc.).
If the data received by the third-party service is rarely changed, it is possible to adapt the expected results in the test case specification every time the service delivers other data than expected. However, even services that deliver more or less fixed data will sometimes change the values with which they respond. Depending on the frequency, it can be very time-consuming to adapt the test data (expected results) stored on the test framework side.
Regular expressions (e.g., sequences of characters that define a search pattern) also may be used for validating information displayed in the browser. If the service delivers well-structured, but frequently changed, data, the test framework can use regular expressions or the like to compare information displayed in a GUI with expected results. This way, no real value is checked, but the structure of the data is. But depending on the ability to identify proper regular expressions, there is a potential variable gap in the test coverage. Moreover, real values are not verified. Also, if the structure of the delivered data is changed, the regular expression may need to be adapted in the test framework.
For example, assume that there is a user interaction in a browser that forces the web application to retrieve the current values of temperature sensors and display the results on a webpage. The displayed parameter could have different values (positive, negative), or different measurement units, such as ° K, ° C. or ° F. To match all the values, the test framework will not check whether the correct value is displayed in the browser (e.g., as “−13.7° C.”). However, the structure of the value can be checked using a regular expression such as, for example:
-?\d+[,\.]\d+° [FKC]
If the processing of the sensor value in the web server does not work correctly, that issue cannot be identified by the test, as the regular expression only checks whether the displayed temperature has the correct structure, rather than whether the value displayed on the webpage is the exact same value delivered by the sensor.
Indeed, if the value delivered by the sensor service is 78.9° F., but the value processed by the web server is 68.9° F. (and is incorrect because of a defect in the SUT), no error will be generated. That is, 68.9° F. is displayed in the browser, but the test automation framework will only check this value using the regular expression, leading the test automation framework to not report an error because a valid temperature with a valid structure is displayed.
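This limitation can be demonstrated directly with the regular expression given above; the sample values are those from the example (written without the trailing period so they match the pattern exactly):

```python
import re

# The structural check from above: an optional sign, digits, a decimal
# separator, digits, a degree sign and space, and a unit letter.
TEMPERATURE_PATTERN = re.compile(r"-?\d+[,\.]\d+° [FKC]")

correct_value = "78.9° F"    # value actually delivered by the sensor
defective_value = "68.9° F"  # wrong value produced by a defect in the SUT

# Both strings satisfy the pattern, so the defect goes undetected.
correct_ok = bool(TEMPERATURE_PATTERN.fullmatch(correct_value))
defective_ok = bool(TEMPERATURE_PATTERN.fullmatch(defective_value))
```

Because both values pass the structural check, the test framework has no basis for reporting the defect.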
Certain example embodiments address the above and/or other concerns. For instance, certain example embodiments relate to improved techniques for testing and monitoring, e.g., when running automated tests in AJAX and/or other asynchronous messaging frameworks. Although AJAX and other like frameworks are beneficial in many real-life scenarios (e.g., when an end-user is sitting in front of a browser) where the communication between the web browser and the web server is performed asynchronously, such asynchronous behaviors oftentimes cause problems for automated testing and/or monitoring. Certain example embodiments advantageously help overcome these issues by providing a Resume Trigger Manager and/or associated techniques. The architecture of certain example embodiments has a very low footprint, as the web browser and the test framework are adapted only slightly. In the more general case of monitoring backend servers (which may be third-party servers), the problem of client identification is solved pragmatically.
One aspect of certain example embodiments relates to a Resume Trigger Manager interposed between a browser and a backend web server. An intelligent approach to handling asynchronous messaging advantageously may help to reduce wait times typically involved in accurate test frameworks.
Another aspect of certain example embodiments relates to intercepting or otherwise handling client-server communications, allowing a test framework to have a more accurate figure for the timing of the server call.
Another aspect of certain example embodiments relates to accumulating wait requests at an intermediary system interposed between the browser and one or more web servers (regardless of whether such web servers are third-party web servers) until a certain event is triggered, thereby enabling the bypassing of other servers in the middle and facilitating direct client to third-party server connections. Relying more directly on the functionality of certain third-party servers advantageously allows for monitoring of other intermediate servers and is a potentially low-footprint and easy-to-implement approach to testing and monitoring tools.
Certain example embodiments relate to a test manager system for facilitating testing of a web server and/or a network-provided computer service running on the web server in accordance with a test execution that includes a plurality of test operations. Processing resources include at least one processor and a memory operably coupled thereto. The processing resources are configured to control the test manager system to at least perform the test execution by at least routing service request messages from one or more client computer systems to the web server in accordance with the test execution; maintaining, for each client computer system, a count of open service requests not yet responded to by the web server; relaying return messages from the web server to the one or more client computer systems in accordance with the routed service request messages; and receiving a first wait request from a given client computer system. In response to reception of the first wait request from the given client computer system: a determination is made as to whether there are any open service requests for the given client computer system based at least in part on the count associated with the given client computer system, and a determination is made as to whether a predefined period of time has elapsed; in response to a determination that there are no open service requests for the given client computer system, a first wait response message indicating that there are no open service requests for the given client computer system is returned to the given client computer system; and in response to a determination that the predefined period of time has elapsed, (a) each pending service request for the given client computer system is interrupted, (b) the count for the given client computer system is reset, and (c) a first wait response message indicating that the predefined period of time has elapsed is returned to the given client computer system.
In certain example embodiments, there is provided a method of testing a web server and/or a network-provided computer service running on the web server in accordance with a test execution that includes a plurality of test operations. The method comprises, at an intermediary computing device including processing resources including at least one processor and a memory: routing service request messages from one or more client computer systems to the web server in accordance with the test execution; maintaining, for each client computer system, a count of open service requests not yet responded to by the web server; relaying return messages from the web server to the one or more client computer systems in accordance with the routed service request messages; and receiving a first wait request from a given client computer system. The method further comprises, in response to reception of the first wait request from the given client computer system: determining whether there are any open service requests for the given client computer system based at least in part on the count associated with the given client computer system, and determining whether a predefined period of time has elapsed; in response to a determination that there are no open service requests for the given client computer system, returning to the given client computer system a first wait response message indicating that there are no open service requests for the given client computer system; and in response to a determination that the predefined period of time has elapsed, (a) interrupting each pending service request for the given client computer system, (b) resetting the count for the given client computer system, and (c) returning to the given client computer system a first wait response message indicating that the predefined period of time has elapsed.
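The per-client counting and wait-request handling described above can be sketched as follows. This is a simplified, single-threaded illustration under assumed names, not a definitive implementation of the Resume Trigger Manager:

```python
import time

class ResumeTriggerManager:
    """Sketch of the intermediary described above: it maintains a count of
    open service requests per client and answers a wait request once the
    count reaches zero or a predefined period of time has elapsed."""

    def __init__(self, timeout_seconds=30.0):
        self.timeout_seconds = timeout_seconds
        self.open_requests = {}  # client id -> count of unanswered requests

    def route_request(self, client_id):
        # A service request is routed to the web server; count it as open.
        self.open_requests[client_id] = self.open_requests.get(client_id, 0) + 1

    def relay_response(self, client_id):
        # The web server answered; one fewer open request for this client.
        self.open_requests[client_id] -= 1

    def handle_wait_request(self, client_id, poll_interval=0.05):
        deadline = time.monotonic() + self.timeout_seconds
        while True:
            if self.open_requests.get(client_id, 0) == 0:
                return "no-open-requests"
            if time.monotonic() >= deadline:
                # Interrupt pending requests and reset the count.
                self.open_requests[client_id] = 0
                return "timeout-elapsed"
            time.sleep(poll_interval)

manager = ResumeTriggerManager(timeout_seconds=0.2)
manager.route_request("client-a")
manager.relay_response("client-a")  # server answered before the wait
status = manager.handle_wait_request("client-a")
```

The test framework can resume as soon as the wait response arrives, rather than sleeping for a fixed worst-case interval after every user interaction.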
According to certain example embodiments, at least one test operation in the test execution may involve a call to a third-party web service hosted by a third-party web server and, for the at least one test operation in the test execution that involves the call to the third-party web service, the processing resources of the test manager system may be configured to control the test manager system to at least: receive a second wait request from the client computer system responsible for initiating the respective call to the third-party web service; route, to the third-party web server, a corresponding service request message from the client computer system responsible for initiating the respective call to the third-party web service; relay a response to the routed corresponding service request message to the client computer system responsible for initiating the respective call to the third-party web service; determine whether the corresponding service request message has been handled; and in response to a determination that the corresponding service request message has been handled, return to the client computer system responsible for initiating the respective call to the third-party web service a second wait response message indicating that the corresponding service request message has been handled and including information associated with the response to the routed corresponding service request message relayed to the client computer system responsible for initiating the respective call to the third-party web service. The second wait request may be received by the test manager system before the corresponding service request message is received by the test manager system. 
The web server may be a web server under direct test; the corresponding service request message from the client computer system responsible for initiating the respective call to the third-party web service may be routed to the third-party web server via the web server under direct test; and the response to the routed corresponding service request message may be relayed to the client computer system responsible for initiating the respective call to the third-party web service via the web server under direct test.
In certain example embodiments, the test execution may require changes to neither the web server, nor the service running thereon.
Corresponding methods and non-transitory computer readable storage mediums tangibly storing instructions for performing such methods also are provided by certain example embodiments, as are corresponding computer programs.
These features, aspects, advantages, and example embodiments may be used separately and/or applied in various combinations to achieve yet further embodiments of this invention.