Automated testing and analysis of complex source code and related development tools using automation scripts is crucial to understanding the functional impact of source code changes across a wide variety of systems in a timely and efficient manner. Identifying the coverage of such automation scripts for a particular code release typically relies on the knowledge and experience of the developers and programmers assigned to the project. Leveraging this knowledge and experience generally takes significant effort for certain tasks, including identifying related or dependent code modules that are affected when a code change is made elsewhere in the project. Further, such effort does not guarantee a reduction in the risk of later incidents or errors once the software goes into production, because application-level code coverage and dependencies are often overlooked.
In addition, understanding source code requirements and their relationship to the automation scripts is a time-intensive process. Although various tools exist for generating source code requirements, these requirements are typically mapped manually to the test automation scripts—again based on the knowledge and experience of developers and programmers. It is therefore challenging to determine the technical coverage of the test cases pulled for a given release. Often, a person must manually add, update, or move the applicable test scripts for every build or release when those scripts run in continuous integration and deployment (CI/CD) environments. This manual process becomes a significant hurdle when there are multiple source code builds per day and the development team wants to run only the scripts applicable to each build. Currently, there are no tools to quantify the selection of impacted test scripts while automatically accounting for the specific technical details of the source code and its dependencies.
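The selection of impacted test scripts described above can be illustrated with a minimal sketch. It assumes two hypothetical inputs: a dependency map from each module to the modules that depend on it, and a map from modules to the automation scripts that exercise them. All module and test names here are illustrative placeholders, not part of any real project or tool.

```python
from collections import deque

# Hypothetical dependency edges: module -> modules that depend on it.
DEPENDENTS = {
    "billing/core.py": ["billing/invoice.py", "reports/summary.py"],
    "billing/invoice.py": ["api/endpoints.py"],
}

# Hypothetical mapping from modules to the automation scripts covering them.
TESTS_FOR_MODULE = {
    "billing/core.py": {"test_billing_core"},
    "billing/invoice.py": {"test_invoicing"},
    "reports/summary.py": {"test_reports"},
    "api/endpoints.py": {"test_api"},
}

def impacted_tests(changed_files):
    """Walk the dependency graph outward from each changed file and
    collect every test script mapped to a reachable module."""
    seen = set(changed_files)
    queue = deque(changed_files)
    while queue:
        module = queue.popleft()
        for dependent in DEPENDENTS.get(module, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    selected = set()
    for module in seen:
        selected |= TESTS_FOR_MODULE.get(module, set())
    return sorted(selected)

# A change to the core module impacts everything that transitively
# depends on it, so all four test scripts are selected.
print(impacted_tests(["billing/core.py"]))
```

In a CI/CD pipeline, a step like this could run against the files changed in each build, so that only the applicable scripts are executed rather than the full suite.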