This invention relates to measurement analysis software, and more particularly to a new and improved measurement analysis repository for software development and maintenance processes including new ways to automate a specific measurement analysis method.
Turning first to an overview of the Process Model Analysis Method, the Process Model is a measurement analysis method that is flexible enough to be used by any organization that employs knowledge workers. Career fields in which knowledge workers are employed include the legal profession, engineering, and software development. The present invention automates the process model analysis method for software development personnel.
FIG. 1 shows the process model, as it is automated by the present invention. The process model contains three components of work analysis: input 1.1, process 1.2, and output 1.3. Input and Output are expressed in terms of measures that gauge the effort and cost exerted in the task and the result of that effort. The input and output measures are combined to produce metrics, which are represented by the performance indicators box designated 1.4 in the model. Performance Indicators are productivity ratios that indicate the rate of product delivery, support and quality. A process is a way in which tasks are completed for a project. Process attribute data provides information about the people, work environment, tools, and techniques used to complete the work product.
The process model of FIG. 1 operates in the following way: The input 1.1 and output 1.3 of the project are measured. The measures are combined to produce metrics 1.4. The metrics are examined and charted to determine how they compare with other project results in the organization. The project metric charts show the statistical groupings of the organization's projects. An analyst can then determine the upper and lower control limits with metric charts such as the upper and lower control limits graph shown in FIG. 2. A project that falls within the control limits 2.1, 2.2 can be considered an average project. Projects that register above the upper control limit 2.1 are considered above average. Projects that fall below the lower control limit 2.2 are considered below average.
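The control-limit classification described above can be sketched in Python. The metric values and the choice of mean plus or minus one standard deviation as control limits are illustrative assumptions; the specification does not prescribe a particular statistical formula for the limits.

```python
from statistics import mean, stdev

def classify_projects(metrics):
    """Classify each project's metric against upper/lower control limits.

    Control limits are taken here as the mean +/- one standard deviation
    of the organization's project metrics (an illustrative choice).
    """
    values = list(metrics.values())
    avg = mean(values)
    sd = stdev(values)
    upper, lower = avg + sd, avg - sd
    result = {}
    for project, value in metrics.items():
        if value > upper:
            result[project] = "above average"
        elif value < lower:
            result[project] = "below average"
        else:
            result[project] = "average"
    return result

# Example: function points delivered per staff-month for six projects
productivity = {"A": 12.0, "B": 11.5, "C": 25.0, "D": 10.8, "E": 3.0, "F": 12.4}
print(classify_projects(productivity))
```

Projects C and E register outside the limits here, so they are the ones the analyst would examine for attribute differences.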
The analyst is interested in the projects that register outside the control limits. The analyst compares the attributes of the projects to determine if there is a significant difference in team composition, work environment factors, tools used, or techniques used between the above average and below average projects. If there is, the analyst could recommend that the attribute be standardized for all projects in the organization. When used in the way outlined hereinabove, the process model of FIG. 1 identifies the ways in which the quality and productivity of processes can be improved internally.
As stated in the description of the process model of FIG. 1, a project has two input measures, time and cost, and two output measures, function points and defects. Each of these four measures now will be described in detail.
Time is work effort represented in hours, days, weeks, months, or years. Time measures are used to record project delivery, repair, enhancement, or support durations. Time is calculated with function point and defect measures to determine project delivery, maintenance efficiency, and improvement efficiency metrics. Metrics will be described in detail presently. Cost is the monetary expenditure of the project. Costs are calculated with output measures to determine cost metrics. Defects occur when a product does not meet specifications. While a defective product is not deliverable, the effort that produced the defect still must be taken into account. Once defects are measured, they are combined with input and output ratios to produce quality metrics.
Function points measure the functionality of software from the user's perspective, regardless of the development or maintenance technology involved. As an output measure, function points are the overall size measure for software applications and projects.
There are five components of functionality. They are: internal logical files; external interface files; external inputs; external outputs; external inquiries. An internal logical file is a user identifiable group of logically related data or control information maintained and utilized within the boundary of the application. An external interface file is a user identifiable group of logically related data or control information utilized by the application but maintained by another application. An external input processes data or control information which enters the application's external boundary and, through a unique logical process, maintains an internal logical file and initiates or controls processing. An external output processes data or control information that exits the application's boundary. An external inquiry is a unique input/output combination, where an input causes an immediate output and an internal logical file is not maintained.
A function point counter tallies the number of record types and data element types for the files of the application, and the number of file types referenced and data element types for inputs, outputs, and inquiries. A record type is a unique record format within an internal logical or external interface file. File types referenced are the number of internal logical files or external interface files read, created, or updated by a component. A data element type is a unique occurrence of data, which is also referred to as a data element, variable, or field. Once these are counted, the counter rates the component as low, average, or high according to a defined matrix. A sample of a matrix for external inputs is presented in Table I as follows:
TABLE I
______________________________________
                 Data Element Types
File Types
Referenced      1-4      5-15      16+
______________________________________
0 or 1           L        L         A
2                L        A         H
3+               A        H         H
______________________________________
Each component group is then weighted according to the sum of its ratings, as shown in Table II:
TABLE II
______________________________________
Component                  Low   Average   High
______________________________________
External Inputs             3       4        6
External Outputs            4       5        7
External Inquiries          3       4        6
Internal Logical Files      7      10       15
External Interface Files    5       7       10
______________________________________
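Tables I and II can be combined to compute a component's contribution to the unadjusted count. The sketch below encodes the external-input matrix of Table I and the weights of Table II; the function and dictionary names are assumptions for illustration, and the other component types would need their own rating matrices.

```python
# Weights from Table II: (low, average, high) per component type
WEIGHTS = {
    "external_input": (3, 4, 6),
    "external_output": (4, 5, 7),
    "external_inquiry": (3, 4, 6),
    "internal_logical_file": (7, 10, 15),
    "external_interface_file": (5, 7, 10),
}

def rate_external_input(file_types_referenced, data_element_types):
    """Rate an external input L/A/H using the matrix of Table I."""
    # Column: data element types banded 1-4, 5-15, 16+
    if data_element_types <= 4:
        col = 0
    elif data_element_types <= 15:
        col = 1
    else:
        col = 2
    # Row: file types referenced banded 0-1, 2, 3+
    if file_types_referenced <= 1:
        row = 0
    elif file_types_referenced == 2:
        row = 1
    else:
        row = 2
    matrix = [["L", "L", "A"],
              ["L", "A", "H"],
              ["A", "H", "H"]]
    return matrix[row][col]

def weight(component_type, rating):
    """Look up the Table II weight for a rated component."""
    low, avg, high = WEIGHTS[component_type]
    return {"L": low, "A": avg, "H": high}[rating]

# An external input referencing 2 files with 7 data element types rates "A"
rating = rate_external_input(2, 7)
print(rating, weight("external_input", rating))  # contributes 4 points
```

Summing the weighted ratings of all rated components yields the unadjusted function point count described below.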
The weighted component totals are then added to produce the unadjusted function point count. The counter then rates the fourteen General System Characteristics, which account for the design characteristics of an application. These characteristics include factors that make the application unique, and ensure that the function point count reflecting the application is precise and not generic. The counter rates each characteristic on a defined scale of 0-5. The characteristic ratings are totalled, multiplied by 0.01, and added to 0.65 to determine the value adjustment factor. The value adjustment factor is multiplied by the unadjusted function point count to determine the final adjusted function point count. The adjusted function point count is the function point measure listed on the output box on the process model, i.e. output 1.3 in FIG. 1.
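The adjustment arithmetic just described can be expressed directly. The fourteen characteristic ratings in the example are invented for illustration; only the 0-5 scale and the 0.65 + 0.01 x (sum of ratings) formula come from the description above.

```python
def adjusted_function_points(unadjusted_count, characteristic_ratings):
    """Apply the value adjustment factor to an unadjusted count.

    characteristic_ratings: the fourteen General System Characteristic
    ratings, each on the defined 0-5 scale.
    """
    assert len(characteristic_ratings) == 14
    assert all(0 <= r <= 5 for r in characteristic_ratings)
    vaf = 0.65 + 0.01 * sum(characteristic_ratings)
    return unadjusted_count * vaf

# Illustrative ratings summing to 35, giving a value adjustment factor
# of 1.00, which leaves the unadjusted count unchanged
ratings = [3, 2, 4, 1, 3, 3, 2, 2, 3, 3, 2, 3, 2, 2]
print(adjusted_function_points(200, ratings))
```

Because the sum of ratings can range from 0 to 70, the value adjustment factor ranges from 0.65 to 1.35, adjusting the count by at most 35 percent in either direction.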
It was previously noted that the process model works effectively as a measurement analysis method for knowledge workers. Function points are the size measure only when analyzing software development and maintenance. For example, if the area being analyzed were a court system, the rules that regulate function point counting could be adapted to work as an appropriate measure reflecting a court's functionality. In particular, the components can be changed to case types for a court system, and these case types can be assigned criteria to determine low, average, and high ratings. Appropriate general characteristics can be defined to relate to the unique aspects of different courts and then be rated. Similar formulas can then be used to calculate unadjusted and adjusted "judicial points." This measure can then be considered by work measurement analysts with the other measures in the process model to determine the productivity and quality metrics of the court system.
The role of metrics in the process model now will be considered. Metrics are the result of the combination of the measures listed in the process model. The types of metrics that can be determined from the measures are inventory (i.e. total function points), quality (i.e. number of defects divided by function points), productivity (function points divided by time), and cost (dollars divided by function points). Metrics are the flags for projects performing above average, average, or below average.
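The metric ratios named above can be computed directly from the four measures. The parameter names and sample figures below are illustrative assumptions.

```python
def project_metrics(function_points, defects, hours, dollars):
    """Combine the four process-model measures into the metric types
    named above: inventory, quality, productivity, and cost."""
    return {
        "inventory": function_points,                # total function points
        "quality": defects / function_points,        # defects per FP
        "productivity": function_points / hours,     # FP delivered per hour
        "cost": dollars / function_points,           # dollars per FP
    }

m = project_metrics(function_points=500, defects=25, hours=2000, dollars=150000)
print(m)
```

Charting these ratios across the organization's projects produces the statistical groupings from which the analyst derives the control limits of FIG. 2.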
Attributes are tracked in the process model and describe the factors that affect the productivity and quality of a project. The attributes comprise the process box of the process model and include teams, work environment, tools, and techniques.
Team information characterizes the composition of application and project teams. Information such as the size of the team and team experience may explain why a project has above average, average, or below average metrics. For example, a team of just the right size that has good job experience and job knowledge would produce above average productivity and quality metrics.
Work Environment information characterizes a team's work surroundings. Ergonomics are an important factor in productivity and quality. For example, poor productivity and quality metrics may result from a project team that is placed in a noisy and cramped environment.
Tools are the utilities used to complete a project. They include both hardware (such as mainframes or personal computers) and software (such as databases and languages) instruments. Poor productivity and quality metrics may be caused by tools that are faulty or out of date.
Techniques are the methods followed while completing a project. For example, a software development and maintenance team may use structured programming for a project. If a team is using an outdated or unnecessary technique, the project's metrics could be below average.
Each of the attribute descriptions hereinabove shows, in a very abbreviated manner, how process attributes affect productivity and quality. It is in this part of the process model that improvements are made. The measures and metrics identify and help to classify projects, but the attributes explain how to improve projects. Once an attribute is discovered to be outstanding, it can then be determined whether that attribute can be instituted in a project with below average metrics. If the attribute improves the substandard project, steps can be taken to standardize that attribute for all projects in the organization.