As it is generally known, software application programs (also referred to as “application programs” or “applications”) are often extremely complex. Additionally, the variance in scope, functionality, depth, and breadth between even relatively similar application programs is much higher than that typically found between examples of other kinds of products. As a result, it is difficult to optimize production capacity in software application development, since every software application developer must substantially “re-invent the wheel” for each application developed.
It would be highly desirable, from a productivity and quality assurance perspective, to have an application design, or “model,” that is applicable and advantageous across multiple specific software application programs. If an appropriate common software model is not available across multiple applications, it is difficult to assure predictable, consistent, reproducible quality levels in the development process, since the application “production line” does not use the same built-in model in all resulting products. However, if a software model can be used across multiple application programs, a development process can more effectively and conveniently pre-test and pre-optimize certain system-level properties, since those properties follow from the shared software model, and are common to all the developed applications.
A number of prior systems have attempted to provide a common model for software application programs. The IBM/MF environment is an early example of such efforts, and includes a form processing application model providing conformant applications with a built-in model. The IBM/MF approach is oriented towards record keeping, and authenticates users to determine which forms they are allowed to access. An authenticated user is permitted to fill in encoded input forms embodying the integrity and logic of the underlying system data structures. The user submits a form in its totality once he or she decides the form is ready for processing. The application then processes the completed form and either rejects it and sends it back for edits, or stores it in a database in which each form equals one stored database record.
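The form-processing model described above may be sketched as follows. The sketch is purely illustrative (the access rules, field names, and in-memory “database” are hypothetical stand-ins, not taken from any actual IBM/MF system); it shows only the flow in which an authenticated user submits a completed form that is either rejected and sent back for edits or stored as exactly one database record.

```python
# Illustrative sketch of a form-based processing model: an authenticated
# user submits an encoded form in its totality; the application either
# rejects it for edits or stores it as exactly one database record.
# All names and rules here are hypothetical.

ALLOWED_FORMS = {"alice": {"order_form"}}      # hypothetical access rules
DATABASE = []                                  # one stored record per form

def submit_form(user, form_name, fields):
    """Process a completed form; reject it, or store it as one record."""
    # Authentication step: determine which forms this user may access.
    if form_name not in ALLOWED_FORMS.get(user, set()):
        return "access denied"
    # The encoded form embodies the integrity rules of the underlying
    # data structures; an incomplete form is sent back for edits.
    if not all(fields.get(k) for k in ("item", "quantity")):
        return "rejected: sent back for edits"
    DATABASE.append({"form": form_name, "user": user, **fields})
    return "stored"

print(submit_form("alice", "order_form", {"item": "widget"}))
# → rejected: sent back for edits
print(submit_form("alice", "order_form", {"item": "widget", "quantity": 2}))
# → stored
```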
Systems such as IBM/MF arose during what may be considered a software application “industrial era”, in which all applications operated using a transaction engine built and optimized for processing forms. A specific example of such a system is one running VM/IMS or MVS/CICS/DB2, operating on an IBM/MF System 360. Using these systems, programmers do not have to “re-invent the wheel”, or necessarily understand how the underlying system works, in order to develop effective applications. While the scope of the applications that can be developed for these environments is relatively limited, application developer productivity, and resulting application quality, are relatively high.
More recently, in what may be referred to as the “client-server era”, no similarly applicable, common application model has arisen. Instead, many variants of “Remote Procedure Calls” (RPCs) are often used to allow applications to spread their execution across different computer systems. Advantageously, applications developed using a client-server approach cover a wide range of data processing needs. However, the fact that these client-server applications cover what is effectively an open-ended range of data processing needs gives rise to a number of difficulties. In particular, there has been no correlation between the user definitions in the database server and user definitions in the applications using the databases on the database server. Typically, the only predictable component in such systems has been a single centralized database server used to store the shared outcome of all transactions.
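The RPC pattern referred to above may be sketched as follows. This is a minimal illustration, not any particular RPC product: the “network” transport is simulated in-process, and the procedure registry and method names are hypothetical. The essential point shown is that a client-side stub serializes a call, the server side dispatches it, and the result returns as if the procedure were local.

```python
# Minimal sketch of the Remote Procedure Call pattern (names hypothetical).
# A client-side stub marshals a call; a "server" dispatches it; the result
# comes back as if the procedure were local. Transport is simulated
# in-process; a real system would marshal requests over a network.
import json

SERVER_PROCEDURES = {"add": lambda a, b: a + b}   # server-side registry

def transport(request_bytes):
    """Stand-in for the network: deliver a request, return a response."""
    req = json.loads(request_bytes)
    result = SERVER_PROCEDURES[req["method"]](*req["params"])
    return json.dumps({"result": result}).encode()

class RpcStub:
    """Client-side proxy: each attribute access becomes a remote call."""
    def __getattr__(self, method):
        def call(*params):
            resp = transport(json.dumps(
                {"method": method, "params": params}).encode())
            return json.loads(resp)["result"]
        return call

client = RpcStub()
print(client.add(2, 3))   # → 5, computed on the "server" side
```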
As a result of the increased complexities presented by the wide scope of applications that can be supported in a client-server environment, the applications themselves have been effectively unmanaged. Application developers generally have great freedom with regard to how they design client-server applications, and no built-in model has been used to bind ‘client’ program portions or ‘server’ program portions to specific roles and responsibilities. Programmers individually may decide where and when the ‘client’ program portion ends and the ‘server’ program portion begins. As a result, what is referred to as the “business logic” of each application is split at some indeterminate point between the client and server portions. Multiple, different points-of-failure result from the fact that there is no unified governing structure for these applications. Moreover, any underlying engine for these applications cannot predict the model used for any given application. As a result, mission-critical or industrial-strength systems are often still IBM/MF applications, despite associated costs and maintenance risks of using this technology.
While the HTTP (HyperText Transfer Protocol) provides a clear page-based processing model for browsing distributed content, it is not a sufficient application model for contemporary applications. As it is generally known, when using HTTP, a user requests a specific HTML (HyperText Mark-Up Language) page, and an HTTP server responds by sending the requested page. An HTTP browser is a bounded client application in that it includes a built-in rendering engine, is capable of sending HTTP requests, and renders the response in a way that allows the receiving user to browse it. An HTTP server is in some sense a bounded application, in that it is capable of processing an HTTP request, and returning an HTML page to the HTTP client. Additionally, user navigation actions are consistently encoded using a concise notation of URLs (Uniform Resource Locators).
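The page-based request/response model described above may be illustrated with a minimal, self-contained sketch. Server and client run in one process purely for illustration, and the page path and content are hypothetical; the point shown is only that a client requests a specific page by URL and the server responds by sending that page.

```python
# Sketch of HTTP's page-based model: a client requests a specific page by
# URL, and the server responds with the requested page. Server and client
# run in one process for illustration; page content is hypothetical.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

PAGES = {"/index.html": b"<html><body>Hello</body></html>"}

class PageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = PAGES.get(self.path)
        if body is None:
            self.send_error(404)          # no such page
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body)            # send the requested page

    def log_message(self, *args):         # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), PageHandler)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# The user's navigation action is encoded concisely as a URL.
url = f"http://127.0.0.1:{server.server_port}/index.html"
with urllib.request.urlopen(url) as resp:
    page = resp.read()
print(page.decode())
server.shutdown()
```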
Despite the advantages of using an HTTP system to provide distributed applications over the Web, developers of different applications based on HTTP do not have the advantages of a shared, built-in application model for their applications. HTTP does not go beyond effectively enabling requests to be routed over the Web to application servers, thus virtualizing access to programs executing on those servers. Related technologies such as EJB (Enterprise Java Beans), COM (Component Object Model), and .NET (Microsoft®'s framework for Web services and component software) provide some features, such as an instruction-set and tools for constructing program components, but fundamentally provide only a syntactical envelope for wrapping interactions among components. These technologies do not provide binding semantic interfaces, nor do they establish any transactional structure for interconnection.
In sum, within the execution context of client-server applications, application servers provide a component model, but not an application model. Client-server application program developers have great freedom to decide how specific components within the applications, such as client and server components, operate. For example, developers of client-server applications may freely determine whether a given component accesses a database directly or not, or calls another component directly or not. Along these same lines, client-server application developers have to establish their own, application-specific techniques for inter-component interactions, and handling of any resulting transactional side-effects. Moreover, client-server application developers must develop application-specific means for managing a stable and reliable application state across components and across user sessions.
Another problem with existing systems relates to the basic need to effectively find specific components that provide services over the Web. In this regard, a “guideline” has arisen in which single components typically correspond to individual “business-level services”. A more technological solution has also arisen that makes a component available as a “Web Service”—the component documents its interface for the service in an XML (eXtensible Markup Language) structure. In this way, Web Service infrastructure technologies provide a distribution system for registered components. This type of documented component distribution system helps facilitate the consumption of component services, but still does not provide a model for developing applications that produce such services or that base their operation on consuming such services.
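The practice of documenting a component's service interface in an XML structure, as described above, may be sketched as follows. The element and attribute names here are simplified stand-ins chosen for illustration, not an actual WSDL or other standardized Web Service schema; the sketch shows only how an interface description suitable for registration and discovery might be generated.

```python
# Illustrative sketch: documenting a component's service interface in an
# XML structure so the component can be registered and discovered as a
# "Web Service". Element names are simplified stand-ins, not a real
# WSDL schema; the service and operation names are hypothetical.
import xml.etree.ElementTree as ET

def describe_service(name, operations):
    """Build a minimal XML interface description for a component.

    `operations` maps an operation name to ((param, type) pairs, return type).
    """
    svc = ET.Element("service", name=name)
    for op, (params, returns) in operations.items():
        op_el = ET.SubElement(svc, "operation", name=op)
        for p, ptype in params:
            ET.SubElement(op_el, "input", name=p, type=ptype)
        ET.SubElement(op_el, "output", type=returns)
    return ET.tostring(svc, encoding="unicode")

xml_doc = describe_service(
    "QuoteService",
    {"getQuote": ([("symbol", "string")], "decimal")})
print(xml_doc)
```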
Accordingly, as described above, there is still a need for a new application program model for today's applications. Even using the above described existing systems, software application components are still significantly ‘context-free’ in terms of critical user, session, and transaction information. Components can be provisioned generally, but the users of the components are not known or accounted for in the provisioning process, and accordingly the use of the provisioned components cannot be accurately anticipated. Transactions supported in existing systems are still based on the traditional approach of coordinating access to shared data. Meanwhile, the scope of and functionality provided by contemporary applications has become increasingly user-centric. For example, most transactions are individualized to a user, as in, for example, many e-Commerce type systems. Moreover, a substantially increasing number of transactions are executed based at least in part on a specific user's stored properties, as in, for example, CRM (Customer Relationship Management), or “Trouble Ticket” type Web-based customer service systems. Existing systems leave users and transactions outside the scope of their component models, and each application must establish its own specific execution model. Software application developers must still custom-build application architectures, while the scope and functionality of their application programs becomes ever more demanding. Additionally, the underlying system engine for most applications still cannot predict application behavior in a useful way, across different applications. As a result, the management and optimization of system-level properties is still a complex, high-maintenance exercise that must be specifically performed on an individual application basis.