Large software development projects typically fall into a “classic” development life cycle. Such a “classic” life cycle is described in United States Defense Department document DOD-STD-2167A Defense Systems Software Development, the disclosure of which is incorporated herein by reference. The phases of this conventional development life cycle occur in chronological order, but may be repeated as new performance capabilities are added to the overall software system or as revisions are made to the system. The life cycle phases are as follows:

(1) Systems Analysis: during which the performance goals for a large system consisting of both software and hardware are specified. At this stage, the major components of the system, often called “configuration items,” are identified.

(2) Software Requirements Analysis: during which the system capabilities that are to be implemented by software are isolated. Further, each capability is broken down into a plurality of software requirements. Further still, any interfaces with external devices and interfaces between major software configuration items are defined.

(3) Software Preliminary Design: during which high level software components are identified, wherein each component addresses a subset of the requirements. Requirement groupings met by software components do not necessarily correspond to requirement groupings that are part of the software capabilities. Further, high level flow of control (called execution control) for the software components is established. In classic software development, execution control has not been specified in the software requirements. However, it would be beneficial for a system engineer to obtain an initial draft of execution control as necessitated by the software requirements, independent of software design factors outside the requirements' immediate domain (such as available operating system services, system architectural constraints, and the like), thereby allowing system engineers who are not computer scientists to analyze the software requirements in isolation. Lastly, during the software preliminary design phase, the interfaces between high level software components are defined.

(4) Software Detailed Design: during which high level software components are broken down into program units, which are in turn broken down into subprogram units. The algorithms for the program units are defined in pseudo-code or a program design language (PDL), and the interfaces between program units and subprogram units are defined.

(5) Code and Unit Test: during which, at the subprogram unit level, algorithms are committed to executable code and compiled. The compiled programs are then tested in isolation, which requires building a processing environment for each unit to simulate the system in which it executes (referred to as a test harness). Test harnesses will often comprise more lines of code than the unit being tested. Also, as they are tested, the units are progressively joined into components.

(6) Software Component Test: during which processing environments are built around the components that are progressively created from units. After the processing environment is built, the components are tested. The testing often moves from a desktop simulation environment to a systems laboratory where portions of the actual system hardware may be present.

(7) Requirements Test: during which the high level software components that have been constructed are tested with respect to their requirements. This phase is generally conducted in a systems laboratory where the overall system is simulated (or some of the system is simulated in conjunction with actual hardware components).

(8) System Test: during which the fully realized system is tested against its performance requirements.
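As an informal illustration of phase (5), the sketch below shows why a unit-level test harness can comprise more code than the unit it exercises: stub objects must stand in for the surrounding system (here, hypothetical sensor and actuator hardware invented purely for this example) before the unit can be executed in isolation.

```python
def throttle_command(sensor_read, actuator_write):
    """Unit under test (hypothetical): reads a speed sensor and
    commands an actuator, clamping the command to the range 0-100."""
    speed = sensor_read()
    command = max(0, min(100, 100 - speed))
    actuator_write(command)
    return command


# --- Test harness: simulates the environment the unit executes in ---
class StubSensor:
    """Stands in for the real speed-sensor hardware."""
    def __init__(self, value):
        self.value = value

    def read(self):
        return self.value


class StubActuator:
    """Records commands instead of driving real hardware."""
    def __init__(self):
        self.commands = []

    def write(self, command):
        self.commands.append(command)


def run_harness():
    """Drive the unit through nominal, mid-range, and out-of-range inputs."""
    results = []
    for speed in (0, 60, 150):
        actuator = StubActuator()
        cmd = throttle_command(StubSensor(speed).read, actuator.write)
        results.append((speed, cmd, actuator.commands))
    return results
```

Even for this one small unit, the harness (stubs plus driver) is longer than the unit itself, consistent with the observation above.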
As the development of a software system proceeds through these phases, it is often the case that errors introduced at an early phase are not detected until a subsequent phase. It is widely recognized that such software errors are a chronic and costly problem because the cost of correcting a software error multiplies during subsequent development phases. This is a particular problem with software requirements, which are created and tested during phase (2). Conventionally, software requirement quality checks are achieved through a “balancing” of interface models to obtain interface/data element integrity and through peer review of the text of the software requirements as well as peer review of other modeling artifacts. However, the inventors' experience has shown that logical errors in software requirements can easily escape these processes. As such, these errors go undetected until subsequent (and costly) phases have completed, thereby requiring designers to essentially “go back to the drawing board”.
Therefore, a need exists in the art for an integrated tool that provides users with the ability, at an early phase of the development life cycle, to evaluate whether a software requirement (or a set of software requirements) for a software capability is properly defined. Furthermore, such a tool should preferably perform such an evaluation during the Software Requirements Analysis phase. Further still, such a tool should exhibit a high degree of user-friendliness, not only to increase the efficiency of the persons using the tool but also to make the tool available to a wider range of people with a wider range of skill sets.