Software applications and products are typically subjected to various test cases prior to and in the early stages of development and deployment. Software testing involves the execution of a software component or system component to evaluate one or more properties (e.g., metrics) of interest. In general, these properties indicate the extent to which the component or system under test meets the requirements that guided its design and development, and responds correctly to all kinds of inputs. These properties are also useful to evaluate whether the component or system under test performs its functions within an acceptable time, is sufficiently usable, can be installed and run in its intended environments, and achieves the general result its stakeholders desire.
The software code for an application is generally stored in a code repository. When a change is made to a portion of the software code, it is “checked in” or “committed” to the repository. Upon each commit, a test plan may execute one or more test cases with the intent of finding software bugs (errors or other defects) and verifying that the software application is fit for use. For example, during bug fixing or feature enhancement, a fairly typical design practice involves running (or re-running) all of the test cases in a test plan for the software application to help validate (or re-validate) the application and make sure the new code changes do not “break” any existing functionality.
Different types of tests can be executed to identify software bugs (errors or other defects) and to verify that the software application is fit for use: installation testing, compatibility testing, smoke and sanity testing, acceptance testing, and regression testing. Regression testing verifies that software which was previously developed and tested still performs the same way after it is changed or interfaced with other software. Changes may include software enhancements, patches, configuration changes, etc. During regression testing, new software bugs or regressions may be uncovered. Sometimes a software change impact analysis is performed to determine what areas could be affected by the proposed changes. These areas may include functional and non-functional areas of the system. The purpose of regression testing is to ensure that changes such as those mentioned above have not introduced new faults. One of the main reasons for regression testing is to determine whether a change in one part of the software affects other parts of the software.
Common methods of regression testing include re-running previously completed tests and checking whether program behavior has changed and whether previously fixed faults have re-emerged. Regression testing can be performed to test a system efficiently by systematically selecting the appropriate minimum set of tests needed to adequately cover a particular change. However, existing techniques for regression testing typically take a long time to execute, thus prolonging software development test cycles. Moreover, existing techniques are unable to efficiently and effectively identify regression incidents when several revisions are made to the source code of the software application in quick succession.
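The idea of selecting a minimum set of tests that covers a particular change can be sketched as follows. This is a simplified illustration, not the techniques introduced here: the coverage map, file names, and test names are all hypothetical, standing in for coverage data that a real system would collect during prior test runs:

```python
# Hypothetical coverage map: each test case -> source files it exercises.
COVERAGE = {
    "test_login": {"auth.py", "session.py"},
    "test_checkout": {"cart.py", "payment.py"},
    "test_profile": {"auth.py", "profile.py"},
    "test_search": {"search.py"},
}

def select_tests(changed_files, coverage=COVERAGE):
    """Return only the tests whose covered files intersect the change."""
    changed = set(changed_files)
    return sorted(
        test for test, files in coverage.items() if files & changed
    )

# A revision touching auth.py selects only the tests that exercise it,
# instead of re-running the full test plan.
print(select_tests(["auth.py"]))  # -> ['test_login', 'test_profile']
```

A selection scheme like this shortens test cycles relative to re-running every test, but, as noted above, it depends on coverage data remaining accurate, which is difficult when many revisions land in quick succession.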
The techniques introduced here may be better understood by referring to the following Detailed Description in conjunction with the accompanying drawings, in which like reference numerals indicate identical or functionally similar elements.