In many implementations, the output of a computer operation or process takes the form of an XML (Extensible Markup Language) document. For example, in the software testing arena, it is common for the results of a set of tests to be provided as an XML document. An example of such a document is as follows:
<Testcase><TestID> 1 </TestID><Status> Passed </Status></Testcase>
...
<Testcase><TestID> n </TestID><Status> Failed </Status></Testcase>
In this document, the results of the tests are described and delimited by XML tags (e.g. Testcase, TestID, and Status). Based on the tags, it can be ascertained that there were a plurality of test cases, and that the test case with test ID “1” passed and the test case with test ID “n” failed.
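The tag-based extraction described above can be sketched with Python's standard XML parser. This is a minimal illustration, not part of the original document: the fragment is wrapped in a hypothetical Results root element so it parses as well-formed XML, and the two test cases shown are illustrative.

```python
import xml.etree.ElementTree as ET

# Hypothetical results document; the <Results> wrapper element is an
# assumption added so the fragment is well-formed XML.
doc = """
<Results>
  <Testcase><TestID> 1 </TestID><Status> Passed </Status></Testcase>
  <Testcase><TestID> 2 </TestID><Status> Failed </Status></Testcase>
</Results>
"""

root = ET.fromstring(doc)

# Walk the Testcase elements and record each test's status by its ID.
results = {}
for tc in root.findall("Testcase"):
    test_id = tc.findtext("TestID").strip()
    status = tc.findtext("Status").strip()
    results[test_id] = status

print(results)  # {'1': 'Passed', '2': 'Failed'}
```

Because the structure is carried by the tags rather than by position, the same extraction works no matter where in the document a given test case appears.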
In a typical testing scenario, a set of tests is run not on just one platform but on a plurality of different platforms. For each platform, one of the above documents is typically generated, setting forth the test cases that were run on that platform and the results of each test case. As a result, after a set of tests is run on a plurality of platforms, a plurality of the above documents is usually generated.
Often, it is desirable to compare the test results across the plurality of documents to derive an overview of the test results. Such an overview can provide valuable information. For example, if a particular test case is failing across all platforms, then it may indicate that there is something wrong with the test case itself. This and other useful information can be derived from the overview. Unfortunately, comparing results across multiple documents is not easy; hence, this overview is difficult to derive.
One method that has been used to try to derive the overview is to run a “diff” utility on the plurality of documents. A diff utility performs a literal, line-by-line comparison between two documents and provides as output a list of all of the differences between them. Thus, with a diff utility, it is possible to determine what is different between two documents. In many instances, however, the output of a diff utility is not very useful. One reason for this is that the test cases are often performed in a different order on different platforms. As a result, the information pertaining to a particular test case may reside in one location in one document and in a different location in another document. Because a diff utility performs a literal, line-by-line comparison, it will report that the two documents differ even when the test results for every test case are the same in both documents. This is clearly not the desired result, and it is just one example of how a diff utility fails to generate useful output; a diff utility has many other shortcomings that prevent it from producing a useful overview of the test results.
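The ordering problem described above can be sidestepped by comparing the documents keyed on TestID rather than line by line. The following sketch is illustrative only; the two sample documents, the parse_results helper, and the <Results> wrapper element are all assumptions, not anything defined in the original text.

```python
import xml.etree.ElementTree as ET

def parse_results(xml_text):
    # Map each TestID to its Status, independent of document order.
    root = ET.fromstring(xml_text)
    return {tc.findtext("TestID").strip(): tc.findtext("Status").strip()
            for tc in root.findall("Testcase")}

doc_a = ("<Results>"
         "<Testcase><TestID>1</TestID><Status>Passed</Status></Testcase>"
         "<Testcase><TestID>2</TestID><Status>Failed</Status></Testcase>"
         "</Results>")
# Same results in the opposite order: a line-by-line diff would flag
# every line of these two documents as different.
doc_b = ("<Results>"
         "<Testcase><TestID>2</TestID><Status>Failed</Status></Testcase>"
         "<Testcase><TestID>1</TestID><Status>Passed</Status></Testcase>"
         "</Results>")

a, b = parse_results(doc_a), parse_results(doc_b)
differences = {tid: (a.get(tid), b.get(tid))
               for tid in a.keys() | b.keys()
               if a.get(tid) != b.get(tid)}

print(differences)  # {} : the documents agree test case by test case
```

A plain diff of doc_a and doc_b would report differences on every line, whereas the keyed comparison correctly finds none.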
Currently, there is no available mechanism that effectively compares content in multiple documents. As a result, the task of deriving the overview is performed manually, if at all. In some instances, the number of test cases can reach into the hundreds or even thousands, and the number of platforms can be as high as forty. Given such numbers, the task of manually generating the overview can be a very daunting and time-consuming one.
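To make the scale of the manual task concrete, the overview derivation itself is simple once the per-platform results are in keyed form. The sketch below aggregates hypothetical results from three platforms to find test cases that fail everywhere; the platform names and result mappings are invented for illustration.

```python
# Illustrative per-platform result mappings (TestID -> Status); in practice
# there could be thousands of test cases and dozens of platforms.
platform_results = {
    "platform_a": {"1": "Passed", "2": "Failed"},
    "platform_b": {"1": "Passed", "2": "Failed"},
    "platform_c": {"1": "Failed", "2": "Failed"},
}

# Collect every TestID seen on any platform.
all_ids = {tid for results in platform_results.values() for tid in results}

# A test case failing on all platforms may indicate a problem with the
# test case itself rather than with any one platform.
failing_everywhere = [
    tid for tid in sorted(all_ids)
    if all(results.get(tid) == "Failed"
           for results in platform_results.values())
]

print(failing_everywhere)  # ['2']
```

Done by hand across forty platforms and thousands of test cases, this same cross-tabulation is exactly the daunting task the passage describes.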