This section provides background information related to the present disclosure which is not necessarily prior art.
Software design models are abstract forms of a solution that can be used to analyze design choices and to partially generate implementations. In many cases, inconsistencies arise between a design model and its implementation when the implementation evolves independently of the model from which it was generated. Checking conformance between a design model and its implementation is extremely important if the models are to continue to serve as contracts and blueprints for the implementation as it evolves. Conformance checking can assist in the understanding of a program implementation, strengthen the communication between designers and programmers, and extend the utility of models beyond initial generation of programs. In some domains, such as embedded software systems, mature conformance checking technologies exist because of the close relationship between a modeling language and an implementation language. For example, Reactis can automatically check whether a C program conforms to a Simulink model.
Object-oriented software development has become a dominant methodology. The Unified Modeling Language (UML), as a standard modeling language, is a popular language for expressing design models. In particular, UML class diagrams are widely used to describe software designs. On the programming side, Java has been extensively used as an object-oriented implementation language. Many model-driven engineering (MDE) tools can automatically generate Java skeletal programs from class diagrams to expedite the software development process. Developers often need to manually add method implementations to the generated skeletal programs. Unfortunately, the completed implementation may not be consistent with the original class diagram. In addition to simple programmer errors, programmer misunderstanding of the generated structure in a skeletal program can lead to inconsistencies; the programmer would be implementing a software system based on a faulty interpretation of the generated code. Consequently, conformance checking that determines whether properties and constraints specified in a design model hold in the final implementation is needed.
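The drift described above can be illustrated with a minimal sketch. The class names, the 1..* multiplicity, and the method bodies below are hypothetical, chosen only to show how a hand-written method can silently violate a constraint from the class diagram while the generated structure still compiles:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical skeleton an MDE tool might generate from a class diagram
// in which Order is associated with one or more (1..*) LineItem objects.
class LineItem {
    final String sku;
    LineItem(String sku) { this.sku = sku; }
}

class Order {
    // Generated field for the association end "items" (multiplicity 1..*).
    private final List<LineItem> items = new ArrayList<>();

    // Generated accessor.
    public List<LineItem> getItems() { return items; }

    // Hand-written method added by a developer: it removes every item,
    // silently violating the 1..* multiplicity from the class diagram.
    public void clearItems() { items.clear(); }

    // A conformance check would flag states where the constraint fails.
    public boolean conformsToMultiplicity() { return !items.isEmpty(); }
}

public class SkeletonExample {
    public static boolean demo() {
        Order o = new Order();
        o.getItems().add(new LineItem("A-1"));
        o.clearItems();                    // implementation drifts from the model
        return o.conformsToMultiplicity(); // false: inconsistency with the diagram
    }

    public static void main(String[] args) {
        System.out.println("conforms: " + demo());
    }
}
```

The program compiles and runs without complaint; only a check against the original diagram reveals the violated multiplicity.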
The inclusion of constraints in a design model has also become an indispensable step toward building a high-quality software system. Although class diagrams are well suited to describing the structural relationships between objects, they are limited in describing the logical relationships that must be maintained; many constraints cannot be expressed diagrammatically, and the class diagram alone cannot capture all applicable constraints and object query expressions that may come into play. As a result, the UML metamodel now contains numerous constraints, or well-formedness rules, expressed in another language, the Object Constraint Language (OCL). The Object Constraint Language is a declarative language for describing rules that apply to software development models, such as UML models. It is most commonly used to provide constraint and object query expressions where the model diagrams alone will not suffice.
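As a sketch of the kind of rule OCL adds beyond the diagram, consider a hypothetical invariant such as `context Account inv: self.balance >= 0` (the class and invariant are assumptions for illustration, not from the source). The Java class below has exactly the structure a diagram would show, yet a hand-written method breaks the invariant; conformance checking asks whether the implementation preserves such constraints:

```java
// Hypothetical OCL invariant attached to a class diagram:
//   context Account inv NonNegativeBalance: self.balance >= 0
// The diagram alone cannot express this rule; OCL captures it.
class Account {
    private int balance;

    public int getBalance() { return balance; }

    // Hand-written method: the missing guard lets balance go negative,
    // breaking the OCL invariant even though the class structure still
    // matches the diagram exactly.
    public void withdraw(int amount) { balance -= amount; }

    // Direct Java translation of the invariant, usable as a runtime check.
    public boolean invariantHolds() { return balance >= 0; }
}

public class OclExample {
    public static boolean demo() {
        Account a = new Account();
        a.withdraw(50);              // balance becomes -50
        return a.invariantHolds();   // false: invariant violated
    }

    public static void main(String[] args) {
        System.out.println("invariant holds: " + demo());
    }
}
```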
Program testing has been widely studied in the past decades, and advances have been made recently. However, traditional program testing faces two major obstacles with respect to conformance checking. First, most testing techniques, including symbolic execution techniques, do not consider the pre- and post-conditions of a program under test; they assume that executing a faulty statement will expose the software fault. Thus, most testing techniques adopt different coverage criteria to cover all statements, including the faulty ones. Unfortunately, many errors in a program cannot be revealed under this assumption. If a program does not contain an asserting statement, an error may go unrevealed even when a faulty statement is reached. By way of example, in a method that calculates the number of zeros in an array, if the programmer forgot to check the first element of the array in a FOR loop (i.e., the programmer wrote for(int i=1; . . . ) instead of for(int i=0; . . . )), no error would be revealed when the faulty i=1 statement is executed in a test case.
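The zero-counting scenario above can be made concrete. The method names and input array below are illustrative assumptions; the point is that a coverage-driven test executes the faulty i=1 statement without any assertion catching the wrong count:

```java
public class CountZeros {
    // Faulty version from the scenario above: the loop starts at i = 1,
    // so the first array element is never examined.
    public static int countZerosFaulty(int[] a) {
        int count = 0;
        for (int i = 1; i < a.length; i++) {
            if (a[i] == 0) count++;
        }
        return count;
    }

    // Correct version: the loop starts at i = 0.
    public static int countZeros(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            if (a[i] == 0) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] input = {0, 3, 0, 7};
        // A test case reaches the faulty loop header and covers every
        // statement, yet without an assertion on the result no failure
        // is reported.
        System.out.println(countZerosFaulty(input)); // prints 1 (wrong)
        System.out.println(countZeros(input));       // prints 2 (right)
    }
}
```

Statement coverage is achieved by the faulty version, yet the off-by-one error remains silent unless the expected count is asserted.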
Second, most testing techniques flip condition branches during execution in order to reach different statements. However, in model-driven engineering, some advanced forward engineering tools translate a class diagram into a program that carries auxiliary information. For instance, in the Eclipse Modeling Framework (EMF), the attribute eContainerFeatureID is an integer used to identify a container and to specify whether it is a navigable feature, by assigning a positive (navigable) or negative (non-navigable) value. If the value of eContainerFeatureID is altered to cover a different execution path, as most testing techniques do, a false positive, that is, a reported error that does not correspond to a real defect, can result.
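The sign convention can be sketched in simplified form. This is not real EMF generated code; the class and method names below are assumptions that only mimic the convention described above, where the bookkeeping field is written by generated code rather than supplied as ordinary program input:

```java
// Simplified sketch (not actual EMF code) of the convention in which
// eContainerFeatureID is positive for a navigable container feature and
// negative otherwise.
public class ContainerFeatureExample {
    private int eContainerFeatureID;

    // Generated code maintains the sign convention itself; the field is
    // internal bookkeeping, never arbitrary external input.
    public void setContainer(int featureID, boolean navigable) {
        this.eContainerFeatureID = navigable ? featureID : -featureID;
    }

    public boolean isNavigable() { return eContainerFeatureID > 0; }

    public static void main(String[] args) {
        ContainerFeatureExample e = new ContainerFeatureExample();
        e.setContainer(3, true);
        // A testing tool that forces eContainerFeatureID negative merely to
        // cover the other branch of isNavigable() explores a state the
        // generated code can never produce, so any "error" reported on that
        // path is a false positive.
        System.out.println("navigable: " + e.isNavigable());
    }
}
```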