Two distinct milestones of any software development lifecycle are requirements gathering and acceptance testing, in which a software product is validated against its requirements. This validation is one of the most difficult tasks, because it requires bridging the abstraction gap between high-level descriptions of requirements and their low-level implementations in source code. Unfortunately, linking acceptance tests to requirements remains a manual, laborious, and time-consuming task.
At least two dimensions make it important to determine which requirements have been tested. The first is economic: if the purpose of testing is to find bugs and there is no evidence that some requirements have been tested, what confidence can stakeholders have in the software product? Equally important is the legislative dimension: various laws dictate that evidence be provided of how requirements are tested, or of how the diverse artifacts related to requirements and tests are traced to one another. Some of these laws are recent (e.g., the Health Insurance Portability and Accountability Act (HIPAA) and the Sarbanes-Oxley Act), while others are standards that have been in place for decades (e.g., the US Department of Defense (DoD) standard on Trusted Computer System Evaluation Criteria (TCSEC)).
For example, many companies that build software products for the DoD must comply with level A of TCSEC, which requires proof of verified design, i.e., that the functionality of a product matches its requirements. The complexity of tracing acceptance tests to requirements and other artifacts (e.g., use cases, sequence diagrams, statechart diagrams, source code, and test cases) may make it difficult for stakeholders to meet economic and legal demands for traceability simultaneously.