Computer-Aided Software Engineering (CASE) tools originated in 1982. This family of techniques has seen little evolution outside the mainframe industry. The term CASE was very promising at its origin, but in the end it produced few results. During the 1990s and the early years of the twenty-first century, new concepts gained traction: Business Process Management (BPM) tools and Model Driven Architecture (MDA). BPM remains a good mechanism to support certain business needs, while MDA disappeared with no practical results. Around the year 2000, the new term Model Driven Engineering (MDE) was coined.
The foundational work has evolved mostly in academia, with few commercial solutions. The goal of MDE is the development of information systems (we will refer to them as business solutions from now on) in an automated way, using the description of the intended product as the only source for its automated generation. The theoretical work on MDE is grounded in the canonical definition of the modeling spaces (Atkinson, C., & Kühne, T. (2003). Model-driven development: A metamodeling foundation. IEEE Software; Stahl, T., & Völter, M. (2006). Model-driven software development: Technology, engineering, management. Chichester, England: John Wiley). Very succinctly, they propose four spaces: objects, models, meta-models (such as UML), and meta-meta-models (such as MOF), related hierarchically through an instantiation operation, plus two additional transformations: one between models (Model to Model, or M2M) and one to text (Model to Text, or M2T). M2T transformations produce the final software product defined by the models. There have been two schools of thought regarding the generation of business solutions. The more prominent one (referred to as Indirect Modeling) attempts to ‘automate programmers’, while the other (referred to as Direct Modeling) attempts to ‘execute the models’. Indirect Modeling generates source code in General Purpose Languages (GPLs) such as Java and .NET, as if a programmer had developed the code; in short, it attempts to create technologies that read UML-like specifications and behave like programmers. Direct Modeling attempts to make the model an executable artifact by itself, as-is, without the production of code or intermediate transformations. Direct Modeling has significant problems in terms of scalability and applicability to general problems.
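The M2T idea can be illustrated with a minimal sketch. Assuming a toy in-memory model of entities with typed attributes (the names `Entity`, `Attribute`, and `emit_java` are illustrative, not part of any MDE standard), an M2T transformation renders each model element as source text in a GPL, playing the ‘programmer’ role that Indirect Modeling tries to automate:

```python
# Hypothetical M2T sketch: a toy model (entities with typed attributes)
# is rendered as Java-like source text. All names here are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attribute:
    name: str
    type: str          # e.g. "String", "int"

@dataclass
class Entity:
    name: str
    attributes: List[Attribute] = field(default_factory=list)

def emit_java(entity: Entity) -> str:
    """M2T: render one model element as Java source text."""
    lines = [f"public class {entity.name} {{"]
    for a in entity.attributes:
        lines.append(f"    private {a.type} {a.name};")
    lines.append("}")
    return "\n".join(lines)

customer = Entity("Customer", [Attribute("name", "String"),
                               Attribute("age", "int")])
print(emit_java(customer))
```

A real M2T engine would of course work from a standardized meta-model rather than ad-hoc classes, but the shape of the transformation, model element in, text out, is the same.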
The critical limitations of the current MDE standard model are: (1) the standard model does not provide a prescription or technique for transforming models into executable business solutions; and (2) models do not support multilevel meta-modeling in a natural way, even though it is a standard feature of all large models and an imperative requirement for model reusability. When humans look at a complex reality, they tend to apply processes of abstraction at many levels, which creates mappings between the reality, our language, and our mental patterns. Without this abstraction capability we would be unable to deal with complex realities as we do. For this reason, the second limitation is of great importance.
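The multilevel issue can be made concrete with a small sketch. Assuming a uniform element that is simultaneously an instance of the level above and a type for the level below (roughly what the multilevel-modeling literature calls a clabject; the class and example names here are hypothetical), instantiation can be chained through as many levels as the domain requires:

```python
# Hypothetical sketch of multilevel instantiation: one uniform element
# acts as both type and instance, so levels can be stacked naturally.
class Element:
    def __init__(self, name, meta=None, **slots):
        self.name = name
        self.meta = meta      # the element one level up (None at the top)
        self.slots = slots    # attribute values held at this level

    def instantiate(self, name, **slots):
        return Element(name, meta=self, **slots)

    def level_path(self):
        chain, e = [], self
        while e:
            chain.append(e.name)
            e = e.meta
        return " <- ".join(chain)

product_type = Element("ProductType")                    # meta-level
book = product_type.instantiate("Book", tax_rate=0.05)   # model level
copy = book.instantiate("MobyDick#42", price=12.0)       # object level
print(copy.level_path())  # MobyDick#42 <- Book <- ProductType
```

In the standard two-level stack, `Book` would have to be either a class or an object; here it is both, which is exactly the abstraction-at-many-levels capability the paragraph above describes.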
These two theoretical limitations greatly restrict the development of large model-based infrastructures. The current state of the art makes the erroneous assumptions that every model can be transformed into a computable system by unknown means, and that multilevel meta-models are not a critical feature in large-scale deployments. The characteristics required of a modeling tool for a successful large-scale implementation are the following:
(1) Scalability: Target realities can be very large, so models have to support very large problems. For instance, in the domain of Enterprise Systems, models can reach the size of large organizations such as banks, insurance companies, and manufacturing industries. Thus, the models must be able to emulate every process and aspect of those companies.
(2) Persistence: Each model has to contain all the complexity of the target system, in such a way that the evolution of the system is done through modifications to the model and its transformations, and no relevant information exists outside the model. This is equivalent to saying that the model persists through time.
(3) Composition: To make scalability and persistence manageable, the capability to decompose large problems into small, manageable ones is imperative. These decomposed models are later enacted as an executable solution that solves the problem as a whole.
(4) Automated deployment: If the model is persistent, the translation of the model into an executable form has to be automatic, in negligible time, and without the need for ad-hoc development, manual adjustments, or technical testing. Only functional testing related to the completeness and correctness of the model, including performance testing, should be required.
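The composition requirement above can be sketched minimally. Assuming sub-models are represented as plain dictionaries mapping element names to their attribute definitions (a deliberate simplification; real composition must also reconcile behavior and detect conflicts), independently developed fragments can be merged into one whole model by unifying the elements they share:

```python
# Hypothetical sketch of model composition: sub-models are dicts of
# element -> attribute definitions; shared elements are merged.
# The sub-model names and contents are illustrative only.
def compose(*submodels):
    whole = {}
    for sub in submodels:
        for element, attrs in sub.items():
            whole.setdefault(element, {}).update(attrs)
    return whole

billing  = {"Customer": {"id": "int"}, "Invoice": {"total": "decimal"}}
shipping = {"Customer": {"address": "String"}, "Parcel": {"weight": "float"}}

enterprise_model = compose(billing, shipping)
# The shared 'Customer' element is merged across both sub-models:
print(sorted(enterprise_model["Customer"]))  # ['address', 'id']
```

Each team works on a small, manageable sub-model; the composed result is the single persistent model from which the executable solution is then generated automatically.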