Emulating customer data for testing purposes is a common practice. The idea is to create a load in the test environment whose operational "profile" resembles the profile of the actual customer data. Customer and test load data are typically compared by evaluating many parameters over both profiles and analyzing the differences. However, as illustrated below, establishing an accurate operational test profile and efficiently comparing the customer and test profiles are not easily accomplished by existing methods in the art.
Some test teams profile a customer's workload by generating a set of queries against existing system management data produced by the system, to help determine the desired features for workload optimization. A comparison is then performed, evaluating many parameters over both profiles and compiling the differences. However, because thousands of parameters and attributes may be compared, it is difficult to separate important data from unimportant data. In addition, these models are implementation-specific, which causes the coverage tooling, including data collection and data presentation, to be model-specific as well.
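As a minimal sketch of the parameter-by-parameter comparison described above, the following hypothetical Python example evaluates a set of parameters over a customer profile and a test profile, flags those whose relative difference exceeds a threshold, and sorts the gaps so the largest differences surface first. All parameter names, values, and the threshold are illustrative assumptions, not actual system management data.

```python
# Hypothetical sketch: compare a customer workload profile against a test
# profile parameter by parameter and report the largest relative gaps first.

def compare_profiles(customer, test, threshold=0.10):
    """Return (name, customer_value, test_value, relative_difference)
    tuples for parameters whose relative difference exceeds `threshold`,
    sorted with the largest gaps first."""
    gaps = []
    for name, cust_val in customer.items():
        test_val = test.get(name, 0.0)
        denom = max(abs(cust_val), 1e-9)  # guard against division by zero
        rel_diff = abs(cust_val - test_val) / denom
        if rel_diff > threshold:
            gaps.append((name, cust_val, test_val, rel_diff))
    return sorted(gaps, key=lambda g: g[3], reverse=True)

# Illustrative parameter values (not real measurements).
customer_profile = {"io_rate": 1200.0, "cpu_util": 0.85, "lock_contention": 0.02}
test_profile = {"io_rate": 1150.0, "cpu_util": 0.40, "lock_contention": 0.02}

for name, cust, test, diff in compare_profiles(customer_profile, test_profile):
    print(f"{name}: customer={cust} test={test} rel_diff={diff:.0%}")
```

With thousands of parameters, this kind of flat report is exactly what becomes hard to filter, which motivates the condensing techniques discussed below.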
For example, using an IBM® System z test environment, test teams can employ a method known as workload profiling. This method retrieves empirical system data produced by the z/OS operating system and performs a logical and statistical comparison between data from the customer's environment and data from the test environment. This comparison shows gaps in stress levels against certain functions within the system and the overall system in general. It also shows configuration options chosen by customers and a certain amount of functional flow of control.
Another existing method of comparing customer and test data is comparative functional coverage analysis, which determines whether the two data sets produce similar coverage results. Functional coverage is a coverage methodology that evaluates the completeness of testing against application-specific coverage models. For example, a model could specify "check that every <unit> sends every possible <signal> to every other <unit>." However, such models may produce too much data to easily separate important data from unimportant data.
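The cross-product model quoted above can be sketched as follows. This is a hypothetical Python example: the unit and signal names and the observed events are illustrative assumptions, and the coverage space is simply every (sender, signal, receiver) combination with distinct sender and receiver.

```python
from itertools import product

# Hypothetical cross-product model: "every <unit> sends every possible
# <signal> to every other <unit>". Names below are illustrative.
units = ["cache", "bus", "memory"]
signals = ["read", "write", "ack"]

# Coverage space: all (sender, signal, receiver) tasks with sender != receiver.
space = {(s, sig, r) for s, sig, r in product(units, signals, units) if s != r}

# Events observed during a test run (illustrative).
observed = {("cache", "read", "memory"), ("bus", "ack", "cache")}

covered = space & observed
uncovered = space - observed
print(f"{len(covered)} of {len(space)} coverage tasks covered")
```

Even this toy model yields 18 coverage tasks; realistic models multiply several large attribute domains, which is why the raw list of covered and uncovered tasks quickly becomes unmanageable.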
Techniques currently exist for condensing coverage data through the use of “hole analysis.” Hole analysis enables the discovery and reporting of large uncovered spaces for cross-product functional coverage models. Thus, hole analysis techniques allow a coverage tool to provide shorter and more meaningful coverage reports to a user. Hole analysis techniques, however, have not been applied within test applications to produce comparisons between test and customer data, and similarly have not been applied within functional coverage tools.
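One way hole analysis condenses such reports can be sketched as follows: rather than listing every uncovered tuple, a whole attribute value is reported as a single "hole" when none of the tasks containing it were covered. This hypothetical Python example reuses the illustrative cross-product model from above; the dimension-projection approach shown is one simple form of hole analysis, not the only one.

```python
from itertools import product

# Hypothetical sketch of hole analysis: collapse all uncovered tasks that
# share a value in one dimension into a single reported hole.

def find_holes(space, covered, dim):
    """Report values in dimension `dim` for which no task was covered,
    largest holes first, as (value, number_of_uncovered_tasks) pairs."""
    holes = []
    for value in {task[dim] for task in space}:
        slice_tasks = {t for t in space if t[dim] == value}
        if not (slice_tasks & covered):  # nothing in this slice was hit
            holes.append((value, len(slice_tasks)))
    return sorted(holes, key=lambda h: h[1], reverse=True)

# Illustrative model and coverage data.
units = ["cache", "bus", "memory"]
signals = ["read", "write", "ack"]
space = {(s, sig, r) for s, sig, r in product(units, signals, units) if s != r}
covered = {("cache", "read", "memory"), ("cache", "write", "bus")}

for value, size in find_holes(space, covered, dim=1):
    print(f"hole: signal={value!r} spans {size} uncovered tasks")
```

Here the sixteen individually uncovered tasks condense to one reported hole (the "ack" signal was never exercised) plus a handful of scattered misses, which is the shorter, more meaningful report the passage above refers to.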
What is needed in the art are enhanced operations for condensing coverage data, specifically customer data and test data used to simulate a customer profile. The present disclosure introduces such enhanced operations through the application of appropriate comparative functional coverage sort algorithms and comparative functional coverage hole analysis upon sets of actual data and test data.