Companies continually try to enhance revenue and profitability by improving the overall “Time-To-Market” of products or services. One critical component in improving Time-To-Market is the empowerment of knowledge workers who participate in well-defined processes. The combination of people and processes is key to developing a successful business model, and the empowerment of knowledge workers is enhanced by providing access to the correct information at the right time within a process.
It is critical to leverage existing information systems when providing knowledge workers access to information. The cost of replacing these systems to facilitate a streamlined process is prohibitive, and the cost of modifying them to accommodate changes related to continuous process improvements only accentuates the point. These information systems must be used with as little intrusion as possible on the ability to upgrade, replace, and utilize each application in its current state.
Over the past 40 years, companies have purchased and installed extremely robust information systems in an attempt to offer their knowledge workers the ability to work smarter within business processes. These systems include Manufacturing or Enterprise Resource Planning (MRP/ERP) systems, Product Data Management (PDM) systems, Relational Database Management Systems (RDBMS), and Computer Aided Design, Engineering and Manufacturing (CAD/CAE/CAM) systems. They usually address the primary task or role of an individual worker. As a result, they are deployed on a tactical or departmental basis, and only facilitate knowledge workers of a specific genre who are creating information with applications. Even though others in the enterprise could benefit from the information available in these systems, they do not have access to it because the systems are often complex, difficult to learn, and expensive to deploy beyond a single department.
Even when the data from these systems is available to knowledge workers throughout the enterprise, assembling useful information from multiple systems is difficult. Users are required to access different systems, often repeating the entry of similar data into each one, and to assemble the information manually, if it is found at all. Too often, the task is instead performed with a phone call to individual users of the multiple systems, which results in delays waiting for return phone calls and a high probability of incorrect information. As a result, the systems that have been installed to facilitate knowledge workers have done so only for individual tasks and have not improved the workers' access to timely information from an overall process point of view.
Another issue involved in making information available to knowledge workers on an enterprise-wide basis is the administration of the systems. User names and passwords must be maintained in every system for each individual who has access to the systems. This also places a burden on users, who must remember multiple user names and passwords that often differ from system to system. In addition, access to information in different systems based upon a user's role must be controlled to preserve the integrity of the data.
One approach taken to overcome this problem is to integrate the data available in applications to form a homogeneous data environment. That way relevant corporate information is available to users of any of the systems involved in the integrated system. The integration is accomplished by copying relevant information from the system it originated in to each of the others. The user interface of each system is modified to make the data available to the user of each system.
There are many problems inherent with the development of an integrated system. The difficulty and the cost of the initial development and maintenance of integrating systems increases exponentially as the number of systems increases since each system usually has its own proprietary application programming interface and data model. Maintaining the integrity of the data throughout the integrated environment is difficult and costly since the data is usually copied from one system to another. Each system user interface requires modification to provide access to new data. And this approach makes future changes to any single system extremely difficult as it affects many other systems.
The advent of the World Wide Web has brought forth several new technologies that have been used in an attempt to resolve the business problem defined above. For instance, Web browsers such as Netscape Navigator, Microsoft's Internet Explorer, and NCSA's Mosaic allow a user to take the software from a box, install it, and access pre-authored web pages from around the world. Recently, many organizations have adopted the use of web technologies within the enterprise. These internal networks are referred to as “intranets.”
Intranets commonly support electronic mail and access to static data such as company policies and financial reports, as well as access to data that exists on the external Internet. The use of certain web technologies has supported this growth and has recently begun to further the use of these intranets to include active-content pages, or delivery of data contained within enterprise systems to the user desktops. The method most companies use to provide active-content pages to client users includes four web technologies: HTTPD, web browsers, HTML, and CGI.
The Hyper Text Transfer Protocol Daemon (HTTPD) is a web server process which runs on many industry-standard operating systems, and supports many industry-standard network protocols. The web server listens on a common network port for requests for data from a web client. When the requests are received, the web server locates an appropriate file stored locally on the server and then passes that file across the network to the web client for translation.
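The core behavior described above — receive a request, locate the named file on the server, and pass its contents back across the network — can be sketched in C. This is a hypothetical, stripped-down illustration, not the code of any actual HTTPD; it omits networking, MIME headers, and error pages, and the function name is illustrative:

```c
/* Minimal sketch of a web server's file-serving step (hypothetical code,
 * not taken from any real HTTPD implementation). */
#include <stdio.h>
#include <string.h>

/* Parse a request line such as "GET /index.html HTTP/1.0", locate the file
 * under `doc_root`, and stream its contents to `out` (the client connection).
 * Returns 0 on success, -1 if the request is malformed or the file is absent. */
static int serve_request(const char *doc_root, const char *request_line, FILE *out)
{
    char method[8], path[256], full[512];

    if (sscanf(request_line, "%7s %255s", method, path) != 2 ||
        strcmp(method, "GET") != 0)
        return -1;                      /* not a request we understand */

    snprintf(full, sizeof full, "%s%s", doc_root, path);
    FILE *f = fopen(full, "rb");
    if (!f)
        return -1;                      /* no such file on this server */

    fputs("HTTP/1.0 200 OK\r\n\r\n", out);  /* status line, then the file */
    int c;
    while ((c = fgetc(f)) != EOF)
        fputc(c, out);
    fclose(f);
    return 0;
}
```

In an actual server, this routine would be invoked once per request arriving on the common network port, with `out` connected to the requesting web client.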
Web “browsers” are commercial off-the-shelf (COTS) applications that run locally on the client. Through the browser, clients make file requests of the web server, receive the file from the server through the network and then translate the file data into a presentation format for the user. One such browser in use today is the Netscape Navigator web browser.
The Hyper Text Markup Language (HTML) is a series of tags stored in a text file. These tags define how a web browser should display information to a user viewing an HTML file. Typically, the information consists of static text surrounded by HTML tags. HTML also offers the ability to insert images onto the page. Another aspect of HTML is the ability to provide tags that point to another page. These tags, referred to as links, allow users to navigate through the World Wide Web, a network of HTML pages. For example, the following tag displays the words “Click here” to the user:

        <A HREF="http://www.mycompany.com/index.html">Click here</A>

Upon clicking on the text, the client browser requests that the index.html file be sent from the server (defined as “www.mycompany.com”) to the client for local translation and display. The location (http://www.mycompany.com/index.html) points to this specific file on a specific server. This file or page location is referred to as an address or Uniform Resource Locator (URL).
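A link such as the one above would typically appear inside a complete static page combining the elements just described — static text, an inline image, and a link. The page below is an illustrative sketch; the file names and server name are hypothetical:

```html
<!-- A minimal static HTML page (all names are hypothetical examples). -->
<HTML>
  <HEAD><TITLE>Welcome</TITLE></HEAD>
  <BODY>
    <H1>Welcome to My Company</H1>
    <P>This paragraph is static text surrounded by HTML tags.</P>
    <IMG SRC="logo.gif">                                          <!-- an inline image -->
    <A HREF="http://www.mycompany.com/index.html">Click here</A>  <!-- a link -->
  </BODY>
</HTML>
```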
Companies have recently started building active-content (or non-static) pages for display by their web clients. The Common Gateway Interface (CGI) is a standard that allows external programs to be written to perform a task (such as a query against a database) and then translate the results into HTML text. This type of solution utilizes a four-tier architecture that includes the client.
CGI programs reside on a web server and are written in a programming language such as C or Perl. A user launches a CGI program by selecting its URL or address in the web client (which, to the user, appears to be the URL of just another HTML page). Upon receiving this request, the web server launches the CGI program that corresponds to the URL and then waits for the CGI program to return a data stream. This data stream sometimes contains information from a database or other system within the enterprise. Upon receiving the resulting HTML text from the CGI program, the server passes the text to the web browser running on the web client. To the user, the result appears on the screen as an ordinary HTML page selected by its URL. Most companies have adopted the CGI architecture for their active-content intranet solutions.
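The flow above can be sketched as a CGI program in C (the document names C and Perl as typical CGI languages; this is a hypothetical illustration, not code from any real system). The web server places the portion of the URL after `?` in the `QUERY_STRING` environment variable and relays everything the program writes to standard output back to the web client:

```c
/* Hypothetical sketch of a CGI program in C; names are illustrative. */
#include <stdio.h>

/* Emit a complete CGI response: a Content-Type header, a blank line,
 * then the HTML body that the client browser will translate for display. */
static void write_response(FILE *out, const char *query)
{
    fprintf(out, "Content-Type: text/html\r\n\r\n");
    fprintf(out, "<HTML><BODY>\n<H1>Query results</H1>\n");
    fprintf(out, "<P>You asked for: %s</P>\n", query ? query : "(nothing)");
    fprintf(out, "</BODY></HTML>\n");
}

/* A real CGI program's main() would simply be:
 *     write_response(stdout, getenv("QUERY_STRING"));
 * Note that changing the screen layout, or the action taken against the
 * underlying system, requires editing and recompiling this C source --
 * the scalability constraint discussed below.
 */
```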
Although the CGI approach gives organizations access to system data, and thus partially addresses the issue of providing knowledge workers with information, the architecture has several scalability constraints. A CGI program is usually written to perform specific actions against a specific system's data, and to produce a single HTML screen as its result. Application developers must write programming code with intimate knowledge of an application's programming interface or access language (such as Structured Query Language). Each time a developer wishes to change a screen, the CGI program must be modified and recompiled. Likewise, each time the developer wishes to change the action(s) against the data, the CGI program must be modified and recompiled. Writing CGI programs requires in-depth knowledge of a programming language such as C or Perl in addition to knowledge of the system's Application Programming Interface (API). This is inherently frustrating to intranet developers, since a basic premise behind the HTML language is ease of development and flexibility. CGI also presents potential security problems when uncompiled source code must reside at a customer site or, in the case of Perl-based applications, in publicly accessible directories.
Attempting to provide interoperability, or access to data from more than one system, with this CGI-based approach further extends the problems of scalability. It requires interfacing to multiple APIs and data models, increases administrative maintenance in each system in order to provide secure access to data, and makes system upgrades difficult to implement.
The basic intranet technologies provide a mechanism for the distribution of information to an enterprise and beyond. These technologies have proven easy to use, so they are ideal for providing a common user interface to enterprise information. However, these technologies need to be extended. There is a need for a web-based intranet technology that:

- Provides access to current information systems in a non-intrusive way.
- Provides a common mechanism to access data (using an application API, database access, or an application user interface) at the HTML development level.
- Provides easy access to and manipulation of data contained in multiple information systems in one HTML page.
- Provides controlled access to information based upon a user's role, while controlling the administrative costs of managing large numbers of users accessing multiple information systems.
- Does not require significant maintenance costs to incorporate updates to individual information systems.
- Does not require programming code to be modified and compiled each time the functionality or format of a display must be changed.
- Does not require programming code to be modified and compiled to change data access.
- Does not require any additional programs or other installation requirements at the client beyond a Web browser.