The present invention relates to software development, and more specifically, to systems and methods for logging and profiling content space data and for building self-reporting of coverage metrics into a software product for use at run-time of the software product.
As part of developing products and applications, particularly software products and applications, requirements are determined, usually from a wide variety of sources, such as stakeholders, strategists, customers, marketing, industry trends, standards organizations, and more. Through various channels, a detailed technical plan of activities for the software development team is derived from the requirements, which can interact in complex ways. The process of generating detailed implementation plans from requirements is subject to errors from various sources. Multiple concurrent dialogs among teams, making assumptions and decisions in parallel, can propagate errors, which can become built into the project plans and the product architecture and/or designs. As such, business results such as time to market, development cost, product viability to compete in the marketplace, and the like can be affected.
Use cases (user stories) have long been used to organize and itemize requirements for software products or application software. Use cases bridge the gap between business and market knowledge, and system design, by focusing on the user interactions with the system. The breakdown of requirements into use cases or line items frequently occurs in parallel with, and in dialog with, the architects and design leaders. During the time frame that requirements are collected and analyzed, the requirements are subject to change, and those changes must be reflected in the line items or use cases. In addition, new requirements are brought up and must be analyzed and fitted in with the existing line items or use cases. Some requirements are eliminated, with corresponding impacts on line items or use cases.
User stories can be a basis for development sprint planning and status tracking, and the basis for a key functional verification test (FVT) quality metric called 'content coverage'. Several usage scenarios can be presented in user story form. For example, product owners are interested in what user stories are being executed the most in the field, so that they can focus usability enhancements on those stories. Development technical leaders are interested in what user stories are taking the most time in the field (i.e., time per story multiplied by execution frequency), so that they can focus performance improvements on those stories and deliver measurable dollar-value to customers. Project managers are interested in what user stories are encountering defects the most in the field (i.e., defects per story multiplied by execution frequency), so that they can focus quality improvements on those stories. Test leads are interested in what user stories are getting good coverage in the field, so that they can prioritize their testing on the less used stories. IT leads are interested in what user stories are being used most, and how long they are cumulatively taking, so that they can monitor how the time of the IT staff is being spent. FVT leads are interested in a content coverage metric, with splits by platform, by interface, and so on, that is reliable and highly automated so that their test engineers do not have to spend time collecting the data for dashboards.
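The frequency-weighted metrics mentioned above can be sketched in code. The following is a minimal illustrative example, not part of any specific product's instrumentation: the event record layout, story names, and field names are all assumptions made for illustration. It aggregates self-reported run-time events per user story into execution count, cumulative time (time per story multiplied by execution frequency), and defect count (defects per story multiplied by execution frequency).

```python
from collections import defaultdict

def aggregate_story_metrics(events):
    """Summarize run-time log events keyed by user story.

    Each event is a dict with 'story' (name of the user story exercised),
    'elapsed_ms' (time spent in that execution), and 'defect' (whether a
    defect was encountered). Returns per-story totals from which the
    frequency-weighted metrics described above can be read directly.
    """
    totals = defaultdict(lambda: {"executions": 0, "total_ms": 0, "defects": 0})
    for ev in events:
        t = totals[ev["story"]]
        t["executions"] += 1          # execution frequency
        t["total_ms"] += ev["elapsed_ms"]   # cumulative time in the field
        t["defects"] += 1 if ev["defect"] else 0  # cumulative defects
    return dict(totals)

# Hypothetical self-reported field data for two user stories.
events = [
    {"story": "create-order", "elapsed_ms": 120, "defect": False},
    {"story": "create-order", "elapsed_ms": 150, "defect": True},
    {"story": "cancel-order", "elapsed_ms": 80, "defect": False},
]
metrics = aggregate_story_metrics(events)
print(metrics["create-order"])
# {'executions': 2, 'total_ms': 270, 'defects': 1}
```

Each stakeholder view described above is then a different sort or filter over the same totals: product owners rank by 'executions', development leaders by 'total_ms', project managers by 'defects', and test leads look at stories with the lowest execution counts.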