Over the 50-year history of discrete event simulation, the growth in applications has been facilitated by some key advances in modeling that have simplified the process of building, running, analyzing, and viewing models. Three important advances have been: (i) the modeling paradigm shift from an event to a process orientation; (ii) the shift from programming to graphical modeling; and (iii) the emergence of 2D/3D animation for analyzing and viewing model execution. These key advances were made 25 years ago and provided the foundation for the set of modeling tools in wide use today.
The past 25 years have been a period of evolutionary improvements with few significant advances in the core approach to modeling. The currently available tools are mostly refined versions of what existed 25 years ago.
Many popular programming languages such as C++, C#, and Java are built around the basic principles of object-oriented programming (OOP). In this programming paradigm, software is constructed as a collection of cooperating objects that are instantiated from classes. When instantiating an object into a model, one should start by specifying the properties governing the behavior of that object. For example, the properties for a machine might include its setup, processing, and teardown times, along with a bill of materials and the operator(s) required during setup. The creator of an object decides on the number and the meaning of its properties.
Object-oriented class design rests on the core principles of abstraction, encapsulation, polymorphism, inheritance, and composition.
Abstraction can be summarized as focusing on the essential. The basic principle is to keep the class structure as simple as possible.
Encapsulation specifies that only the object can change its state. Encapsulation seals the implementation of the object class from the outside world.
Polymorphism provides a consistent method for messages to trigger object actions. Each object class decides how to respond to a specific message.
Inheritance allows new object classes to be derived from existing object classes, sometimes referred to as the “is-a” relationship. This is also referred to as sub-classing since a more specialized class of an object is being created. Sub-classing typically allows the object behavior to be extended with new logic, and also modified by overriding some of the existing logic.
Composition allows new object classes to be built by combining existing object classes, sometimes referred to as the “has-a” relationship. Objects become building blocks for creating higher level objects.
Within this framework, objects are implemented by coding one or more methods that change the state of an object. Derived objects may override (i.e., replace) methods inherited from their parent class, or extend the behavior by adding new methods.
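The five principles can be made concrete with a short sketch in a simulation setting; the classes, properties, and cycle times below are illustrative inventions, not taken from any particular simulation tool:

```python
class Machine:
    """Encapsulation: state is held inside the object and changed
    only through its own methods (illustrative example)."""
    def __init__(self, setup_time, process_time):
        self.setup_time = setup_time      # properties governing behavior
        self.process_time = process_time
        self.status = "idle"

    def cycle_time(self):
        return self.setup_time + self.process_time

class Lathe(Machine):
    """Inheritance (sub-classing): a Lathe *is a* Machine."""
    def __init__(self, setup_time, process_time, teardown_time):
        super().__init__(setup_time, process_time)
        self.teardown_time = teardown_time

    def cycle_time(self):
        # Overriding: extend the inherited behavior with a teardown step.
        return super().cycle_time() + self.teardown_time

class WorkCell:
    """Composition: a WorkCell *has* machines as building blocks."""
    def __init__(self, machines):
        self.machines = machines

    def total_cycle_time(self):
        # Polymorphism: each machine answers cycle_time in its own way.
        return sum(m.cycle_time() for m in self.machines)

cell = WorkCell([Machine(2.0, 5.0), Lathe(2.0, 5.0, 1.0)])
print(cell.total_cycle_time())  # 7.0 + 8.0 = 15.0
```

Abstraction appears here as the deliberately minimal interface: a machine exposes only the properties and the one behavior (`cycle_time`) that matter to the model.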
The roots of these ideas date back to the early 1960's with the Simula 67 simulation modeling tool. That tool was created by Kristen Nygaard and Ole-Johan Dahl (1962) of the Norwegian Computing Center in Oslo to model the behavior of ships. Nygaard and Dahl introduced the basic concepts of creating classes of objects that own their data and behavior and can be instantiated into other objects. This was the birth of modern object-oriented programming. Because Simula 67 was a programming language and not a graphical modeler, it never developed into a widely used tool.
In the early days of discrete event simulation, the dominant modeling paradigm was the event orientation implemented by tools such as Simscript (Markowitz et al., 1962) and GASP (Pritsker, 1967). In that paradigm, the “system” is viewed as a series of instantaneous events that change the state of the system. The modeler defines the events in the system and models the state changes that take place when those events occur. This approach to modeling, while very flexible and efficient, is also a relatively abstract representation of the system. As a result, many people found modeling with an event orientation to be difficult.
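The event orientation can be illustrated with a minimal event-scheduling kernel. The sketch below is a hypothetical Python rendering, not the actual Simscript or GASP design: a single server with fixed interarrival and service times is modeled as two event handlers that change state and post future events to a future event list (FEL):

```python
import heapq
import itertools

# Model state and the future event list (FEL).
state = {"queue": 0, "busy": False}
fel = []
seq = itertools.count()   # tie-breaker for events at the same time
now = 0.0

def schedule(time, handler):
    heapq.heappush(fel, (time, next(seq), handler))

def arrival():
    # State change: a job arrives; start service if the machine is idle.
    if state["busy"]:
        state["queue"] += 1
    else:
        state["busy"] = True
        schedule(now + 5.0, departure)   # fixed service time of 5

def departure():
    # State change: service ends; pull the next queued job, if any.
    if state["queue"] > 0:
        state["queue"] -= 1
        schedule(now + 5.0, departure)
    else:
        state["busy"] = False

for k in range(3):
    schedule(k * 4.0, arrival)           # arrivals at t = 0, 4, 8

while fel:                               # the classic event-scanning loop
    now, _, event = heapq.heappop(fel)
    event()

print(now, state)   # 15.0 {'queue': 0, 'busy': False}
```

The abstraction the text refers to is visible here: the queue and the server exist only as counters mutated by handlers, with no entity that flows through the system.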
In the 1980's, the process orientation displaced the event orientation as the dominant approach to discrete event simulation. In the process view, one describes the movement of passive entities through the system as a process flow. The process flow is described by a series of process steps (e.g. seize, delay, release) that model the state changes taking place in the system. This approach dates back to the 1960's, with the introduction of GPSS (Gordon, 1960), and provides a more natural way to describe the system. Because of many practical issues with the original GPSS (e.g. an integer clock and slow execution), it did not become the dominant approach until improved versions of GPSS (Henriksen, 1976) along with newer process languages such as SLAM (Pegden/Pritsker, 1979) and SIMAN (Pegden, 1982) became widely used in the 1980's.
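To make the contrast with the event orientation concrete, the sketch below is a hypothetical, minimal process-oriented engine: each entity is written as a generator of process steps (seize, delay, release), and the engine translates those steps back into events on a future event list. The names and the yield protocol are illustrative, not taken from GPSS, SLAM, SIMAN, or any other actual tool:

```python
import heapq
import itertools

class Resource:
    """A seizable resource (e.g. a machine) with a waiting queue."""
    def __init__(self, capacity=1):
        self.capacity = capacity
        self.waiting = []

class Engine:
    """Drives entity processes written as generators of process steps."""
    def __init__(self):
        self.now = 0.0
        self.events = []                 # future event list
        self.seq = itertools.count()     # tie-breaker for equal times

    def start(self, process, delay=0.0):
        heapq.heappush(self.events, (self.now + delay, next(self.seq), process))

    def run(self):
        while self.events:
            self.now, _, process = heapq.heappop(self.events)
            self._step(process)

    def _step(self, process):
        while True:                      # execute steps until the entity waits
            try:
                op, arg = next(process)
            except StopIteration:
                return
            if op == "delay":            # unconditional time delay
                self.start(process, arg)
                return
            if op == "seize":            # wait for a unit of the resource
                if arg.capacity > 0:
                    arg.capacity -= 1
                else:
                    arg.waiting.append(process)
                    return
            if op == "release":          # free a unit; resume first waiter
                if arg.waiting:
                    self.start(arg.waiting.pop(0))
                else:
                    arg.capacity += 1

def job(name, machine, engine, log):
    """Entity process flow: seize -> delay -> release."""
    yield ("seize", machine)
    log.append((engine.now, name, "start"))
    yield ("delay", 5.0)
    yield ("release", machine)
    log.append((engine.now, name, "done"))

engine = Engine()
machine = Resource(capacity=1)
log = []
engine.start(job("A", machine, engine, log))
engine.start(job("B", machine, engine, log))
engine.run()
print(log)  # [(0.0, 'A', 'start'), (5.0, 'A', 'done'), (5.0, 'B', 'start'), (10.0, 'B', 'done')]
```

A blocked entity simply stays parked in the resource's waiting list until a release resumes it; this mapping of process steps onto an underlying event list is, in essence, why the process view can be more natural for the modeler without giving up the event-based machinery underneath.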
During the 1980's and 90's, graphical modeling and animation emerged as key features in simulation modeling tools. Graphical model building simplified the process of building process models while graphical animation dramatically improved the viewing and validation of simulation results. The introduction of Microsoft Windows made it possible to build improved graphical user interfaces and a number of new graphically based tools emerged (e.g. ProModel and Witness).
Another conceptual advance that occurred during this time was the introduction of hierarchical process modeling tools that supported the notion of domain-specific process libraries. The basic concept here is to allow users to create new process steps by combining existing process steps. The widely used Arena modeling system of Pegden/Davis (1992) is a good example of this capability.
Since the widespread shift to a graphics-based process orientation, there have been refinements and improvements in the tools but no real advances in the underlying framework. The vast majority of discrete event models continue to be built using the same process orientation that has been widely used for the past 25 years.
Although a process orientation has proven to be very effective in practice, an object orientation provides an attractive alternative modeling paradigm that has the potential to be more natural and easier to use. In an object orientation, the system is modeled by describing the objects that make up the system. For example, a factory is modeled by describing the workers, machines, conveyors, robots and other objects that make up the system. The system behavior emerges from the interaction of these objects.
A number of products have been introduced to support an object orientation; to date, they have all been direct applications of OOP languages to developing objects for use in simulation modeling. These programming-based tools have been largely shunned by practitioners as too complex, and most practitioners continue to stick with the process orientation. It is believed that much of this is because, while the underlying modeling paradigm might be simpler and less abstract, the specific implementation may be difficult to learn and use (e.g. require programming), or slow in execution. This is no different from the challenges the process orientation faced in unseating the event orientation. Although the first process modeling tool (GPSS) was introduced in 1961, it took 25 years before the process orientation was developed to the point where practitioners were persuaded to make the paradigm shift.
Although simulation has traditionally been applied to the design problem, it can also be used on an operational basis to generate production schedules for the factory floor. When used in this mode, simulation is a Finite Capacity Scheduler (FCS) and provides an alternative to other FCS methods such as optimization algorithms and job-at-a-time sequencers. Simulation-based FCS has a number of important advantages (e.g. speed of execution and flexible scheduling logic) that make it a powerful solution for scheduling applications.
Simulation provides a simple yet flexible method for generating a finite capacity schedule for the factory floor. The basic approach with simulation-based scheduling is to run the factory model using the starting state of the factory and the set of planned orders to be produced. Decision rules are incorporated into the model to make job selection, resource selection, and routing decisions. The simulation constructs a schedule by simulating the flow of work through the facility and making “smart” decisions based on the scheduling rules specified. The simulation results are typically displayed as jobs loaded on interactive Gantt charts that can be further manipulated by the user. There are a large number of rules that can be applied within a simulation model to generate different types of schedules focused on measures such as maximizing throughput, maintaining high utilization on a bottleneck, minimizing changeovers, or meeting specified due dates.
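As a hedged illustration of this approach, the sketch below generates a one-machine schedule by dispatching planned orders with a pluggable decision rule (earliest due date here); the job data, field names, and rule are hypothetical, and a real simulation-based scheduler would of course model the full factory rather than a single resource:

```python
def build_schedule(jobs, rule):
    """Simulate one machine working through a set of planned orders.

    jobs: list of dicts with 'name', 'proc_time', and 'due' fields.
    rule: job-selection decision rule applied at each dispatch point.
    Returns a Gantt-style list of (job, start, end) tuples.
    """
    now = 0
    pending = list(jobs)
    gantt = []
    while pending:
        job = rule(pending, now)          # the "smart" decision
        pending.remove(job)
        gantt.append((job["name"], now, now + job["proc_time"]))
        now += job["proc_time"]
    return gantt

def earliest_due_date(pending, now):
    # One of many possible rules; others might minimize changeovers
    # or keep a bottleneck resource highly utilized.
    return min(pending, key=lambda j: j["due"])

jobs = [{"name": "A", "proc_time": 4, "due": 10},
        {"name": "B", "proc_time": 2, "due": 3},
        {"name": "C", "proc_time": 3, "due": 8}]
print(build_schedule(jobs, earliest_due_date))
# [('B', 0, 2), ('C', 2, 5), ('A', 5, 9)]
```

Swapping in a different rule function changes the character of the schedule without touching the engine, which is the flexibility the text attributes to simulation-based FCS.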
Because of the special requirements imposed by scheduling applications (including the need for specialized decision rules and the need to view results in an interactive Gantt chart form), simulation-based scheduling applications have typically employed specialized simulators specifically designed for this application area. The problem with this approach is that such specialized simulators have built-in, data-driven factory models that cannot be altered or changed to fit the application. In many cases, this built-in model is an overly simplified view of the complexities of the production floor. This one-model-fits-all approach severely limits the range of applications for these tools. Some production processes can be adequately represented by this fixed model, but many others cannot.
There is a continued need for a simulation modeling system that is easy to use, does not require programming skills on the part of the user, and can be tailored to, and used in, a variety of environments and applications.