Testing is an important part of the software lifecycle. Organizations must test their software systems frequently to validate the systems' performance. This is especially true when new software modules are installed and when existing modules undergo changes, such as updates or customizations.
Software systems belonging to different organizations are typically configured differently because each organization utilizes its own specific business processes. Although the business processes may differ, they are built from similar building blocks (e.g., different organizations may use similar transactions). Thus, despite organization-specific differences between their software systems, testers at different organizations often end up running the same, or quite similar, tests on their respective systems.
Although different organizations often end up utilizing similar tests, each organization typically develops its own testing suite. Knowledge such as best practices and known system vulnerabilities therefore needs to be rediscovered by each organization; the cumulative testing experience that could be aggregated across organizations, the (testing) crowd wisdom, so to speak, is simply not shared. Moreover, even if organizations were to share their testing data, it may still be problematic to select and/or generate appropriate tests from the large body of testing data that would be available. Given that there may be many different tests for many software modules, it is often far from trivial to effectively leverage testing data collected from different organizations on behalf of a particular user or organization.
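To illustrate why naively reusing shared testing data falls short, consider a minimal sketch (all names and data below are hypothetical, not drawn from any actual system) in which tests from a cross-organization corpus are selected by exact module match. Such selection ignores each organization's specific configuration, which is one reason effective test selection is far from trivial:

```python
# Hypothetical sketch: selecting tests from a shared, cross-organization
# corpus by exact module name. All names and data are illustrative.

def select_tests(corpus, module):
    """Return all tests in the shared corpus that target the given module."""
    return [t for t in corpus if t["module"] == module]

shared_corpus = [
    {"org": "A", "module": "billing", "name": "test_invoice_totals"},
    {"org": "B", "module": "billing", "name": "test_invoice_rounding"},
    {"org": "B", "module": "payroll", "name": "test_overtime_calc"},
]

selected = [t["name"] for t in select_tests(shared_corpus, "billing")]
# Exact-match selection retrieves every "billing" test regardless of how
# each contributing organization has configured that module, so many of
# the returned tests may be inapplicable to the requesting organization.
```

The sketch selects on module name alone; a practical approach would also have to account for organization-specific configuration and customization when ranking or filtering the shared tests.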