Stratified random sampling is theoretically well grounded in the Central Limit Theorem and has been used extensively in many different environments, including scientific research, pharmacology studies, government, census, and marketing research, and, most closely related to the software industry, as the technique of choice for data processing audits.
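The idea can be sketched briefly: rather than counting every system, count a random sample within each stratum (e.g., size class) and expand each stratum's sample mean by the stratum's population size. The following is a minimal illustration with made-up portfolio data; the strata, counts, and 10% sampling fraction are assumptions for demonstration only, not parameters from this document.

```python
import random

# Hypothetical portfolio: applications grouped into size strata.
# The function point values are synthetic, for illustration only.
random.seed(7)
portfolio = {
    "small":  [random.randint(50, 300) for _ in range(400)],
    "medium": [random.randint(300, 1500) for _ in range(150)],
    "large":  [random.randint(1500, 8000) for _ in range(50)],
}

def stratified_total_estimate(strata, sample_fraction=0.1):
    """Estimate total portfolio function points by counting only a random
    sample in each stratum and expanding the stratum sample mean by the
    stratum size (the classic stratified expansion estimator)."""
    total = 0.0
    for systems in strata.values():
        n = max(1, int(len(systems) * sample_fraction))
        sample = random.sample(systems, n)
        stratum_mean = sum(sample) / n
        total += stratum_mean * len(systems)  # expand mean to full stratum
    return total

estimate = stratified_total_estimate(portfolio)
actual = sum(sum(s) for s in portfolio.values())
print(f"estimated total FP: {estimate:,.0f}")
print(f"actual total FP:    {actual:,}")
```

Because each stratum is relatively homogeneous, the combined estimate converges on the true total with only a small fraction of the systems actually counted, which is the Central Limit Theorem at work stratum by stratum.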
Often the problems faced by large IS/IT organizations in establishing a function point baseline for every system in their portfolio appear so formidable that the task is simply avoided. The required investment seems overwhelming to business managers, who are sometimes unaware that the information from such portfolio evaluations is critical to their analytical decision process. At present, one of two options is available for accomplishing a full portfolio count: (1) function point count every system in the entire portfolio, or (2) count all lines of source code according to IEEE Standard P1045 and “backfire” the result into function points. There are numerous factors to consider with either option, but for most large organizations (perhaps Fortune 100 and larger corporations and government agencies) the bottom line on the first is “too labor intensive” and/or “too costly” (the former if sized in-house, the latter if sizing is outsourced), and on the second is “too inaccurate to be useful”. That is the end of the story for process improvement and for accurate, reliable operations analysis reporting.
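For concreteness, “backfiring” divides a line-of-code count by a published language-specific gearing factor (average source lines per function point). The sketch below uses illustrative factor values chosen for this example; real gearing tables vary by source and language version, which is one reason the technique is widely considered too inaccurate for baselining.

```python
# A minimal sketch of "backfiring": converting source lines of code (SLOC)
# into approximate function points via language gearing factors.
# The factor values below are assumed for illustration, not an official table.
GEARING_FACTORS = {  # average SLOC per function point (assumed values)
    "COBOL": 105,
    "C": 128,
    "Java": 53,
}

def backfire(sloc: int, language: str) -> float:
    """Approximate function points from a raw line-of-code count."""
    return sloc / GEARING_FACTORS[language]

# Example: a 525,000-line COBOL system at 105 SLOC/FP.
fp = backfire(525_000, "COBOL")
print(f"~{fp:,.0f} FP")  # → ~5,000 FP
```

Note that the result is only as good as the gearing factor: the same line count backfired under two plausible factors for the same language can differ by a large margin, which is the inaccuracy the passage above refers to.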
A second problem arises for organizations that need to develop estimates early in a project life cycle. Typically, a function point count or estimate provides the functional size measure that feeds the estimating process. The available time frame is short: decisions about alternatives and scheduling must be made quickly. For large projects, function point counting an entire system can take days, weeks, or in extreme cases months. The organization then often falls back on estimates based on expert opinion, with no way to check the reasonableness of the result. Non-optimal plans are made, in some instances with disastrous results.
Of all the metrics needed for a fair analysis of software engineering processes and the related productivity and quality evaluations, size is one of the most essential. Without accurate sizing of systems engineering output, any business analysis (which often relies on surrogates for size that may be inappropriate) risks producing misleading and even damaging results.
The requirement to include operations measures for successful IS/IT business contribution analysis has been well documented.
There is, therefore, an unmet need today in almost all large IS/IT organizations, where operational measurements, and especially measures of product function size, are unavailable or invalid.
There is, therefore, a need in the art for a system and method for estimating software function sizes.