The present disclosure relates to functional verification testing, and more specifically, to determining an optimal set of tests for a product release (e.g., of software or firmware).
During the lifetime of a software product, developers may release service packs that provide additional functionality. For example, firmware for a microprocessor might periodically receive service packs corresponding to a new release of the firmware. The service packs can include multiple patches providing significant changes to the firmware, such as the addition of major functions. Prior to release, each patch is tested for quality assurance. To do so, a developer may prepare and execute automated test cases to functionally verify areas of development code modified or added by the patch. Practice guidelines for these test cases often require that the test cases comprehensively cover any affected areas of development code. That is, the result of executing a bucket of test cases must ensure that each affected area of code is executed. Further, if any test cases fail, the developer returns to the relevant code to address the issues raised by the failing test cases. Test cases help minimize bugs in the software updates that are subsequently released.
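The coverage requirement above can be sketched in a few lines. This is a hypothetical illustration, assuming each test case can be mapped to the set of code areas it executes; the function name, test names, and area identifiers are illustrative and not drawn from any real test suite.

```python
def uncovered_areas(affected_areas, bucket):
    """Return the affected code areas not executed by any test in the bucket.

    affected_areas: set of code-area identifiers modified by the patch.
    bucket: mapping of test-case name -> set of code areas it executes.
    """
    covered = set().union(*bucket.values()) if bucket else set()
    return affected_areas - covered

# Illustrative data: a patch touching three areas, and a two-test bucket.
affected = {"parser", "scheduler", "io"}
bucket = {
    "test_parse_basic": {"parser"},
    "test_schedule_order": {"scheduler", "io"},
}
print(sorted(uncovered_areas(affected, bucket)))  # -> [] (full coverage)
```

A non-empty result would indicate a coverage gap that must be closed with additional test cases before the patch can be released.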
A typical software product may be associated with a large number of automated test cases. Further, with each subsequent release of the software product, the number of test cases increases. Consequently, executing an entire bucket of test cases may be less than optimal. For example, a developer may prepare test cases that cover an area of code containing the new features provided by a release patch. In some scenarios, previously created test cases already cover that area of code, rendering the new test cases redundant. Executing unnecessary test cases burdens both the testing system and developer resources.
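One way to prune such redundancy is to treat test selection as a set-cover problem over the affected code areas. The sketch below uses a greedy heuristic: repeatedly pick the test case that covers the most still-uncovered areas, and skip tests that add no new coverage. This is an assumed approach for illustration only, with hypothetical test names and coverage data; the disclosure's actual selection method may differ.

```python
def select_minimal_bucket(affected_areas, bucket):
    """Greedily choose a subset of tests that covers all affected areas.

    Returns (selected test names, areas left uncovered). A non-empty
    second element indicates a coverage gap no candidate test can close.
    """
    remaining = set(affected_areas)
    selected = []
    candidates = dict(bucket)
    while remaining and candidates:
        # Pick the test covering the most still-uncovered areas.
        best = max(candidates, key=lambda t: len(candidates[t] & remaining))
        if not candidates[best] & remaining:
            break  # no remaining candidate adds any coverage
        selected.append(best)
        remaining -= candidates.pop(best)
    return selected, remaining

# Illustrative bucket: the second test is redundant with the first.
bucket = {
    "test_old_full": {"parser", "scheduler"},
    "test_new_parser": {"parser"},
    "test_io_paths": {"io"},
}
selected, gaps = select_minimal_bucket({"parser", "scheduler", "io"}, bucket)
print(selected, gaps)  # -> ['test_old_full', 'test_io_paths'] set()
```

Here the redundant `test_new_parser` is dropped because `test_old_full` already exercises the parser area, while full coverage of the affected areas is preserved.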