1. Field of the Invention
The present invention relates, generally, to computer networks and communication systems and, more particularly, to the transport and processing of data in computer networks and communication systems.
2. Discussion of the Background
The explosion in the use of wired and wireless computer networks and communication systems in almost every aspect of day-to-day business operations and personal life has created an insatiable and, indeed, necessary demand for increased speed, reliability, and security in the transmission and processing of data in computer networks and communication systems. Computer networks and communication systems must enable the applications and users utilizing those networks and systems to transport and process data with the speed and, more particularly, end-to-end response times, reliability, and security which are, in most cases, critical to acceptable system, application, and user operation. Moreover, the increased functionality and robustness of today's systems and applications, and continued demand for additional features and functionality, as well as the lack of uniform standards adopted and implemented by the divergent devices, applications, systems, and components communicating in operation of such systems and applications have led to significant deterioration in these critical performance factors—i.e., speed/end-to-end response times, reliability, and security.
Most conventional approaches directed to increasing data transmission and processing speeds, and the reliability and security of such transmissions and processing, have focused on hardware solutions, such as deploying faster processors (i.e., CPUs) and increasing bandwidth by upgrading transport media and associated transmission hardware. The evolution of these attempted solutions can be traced through the developing standards from the 300 baud dialup modems up through the 56 kbit/s dialup modems, as well as through the evolution of routers and switches from 10 Mbit/s up to 1 Gbit/s throughput. Processor speeds have likewise ranged from the original 4.77 MHz up through 1.5 GHz. Such solutions, however, have inherent limitations in the performance increases possible. Most notably, the typical “bottlenecks” limiting data transport and processing speeds in computer networks and communication systems are not the hardware being utilized, but the software and, more particularly, the software architecture driving the transport and processing of data from end point to end point.
Traditional transport software implementations suffer from design flaws, a lack of standardization and compatibility across platforms, networks, and systems, and the utilization and transport of unnecessary overhead, such as control data and communication protocol layers. These drawbacks are due, in large part, to a lack of industry agreement on a universal protocol or language to be used in the overall process of transporting data between a message source and a message destination. FIG. 1 is a representation of the layer structure of the Open Systems Interconnection (OSI) model for communication between computer systems on a network. Referring to FIG. 1, standards have been established and generally accepted by the industry for network access (i.e., the physical, data link, and network layers), and nearly all systems and applications provide for communication using Transmission Control Protocol/Internet Protocol (TCP/IP), with IP running at the OSI network layer and TCP running at the OSI transport layer. There is, however, severe fragmentation and a lack of industry adoption and agreement with respect to a protocol or language for interfacing with TCP/IP and the layers above the transport layer in the OSI model (i.e., the session, presentation, and application layers).
As a consequence of this lack of a universal protocol or language, numerous and varying protocols and languages have been, and continue to be, adopted and used, resulting in significant additional overhead, complexity, and a lack of standardization and compatibility across platforms, networks, and systems. Moreover, this diversity in protocols and languages, and the lack of a universal language beyond the transport layer, forces the actual data being transported to be saddled with significant additional data to allow for translation as the data passes through the various layers of the communication stack. The use of these numerous and varying protocols and languages, such as, for example, HTTP, WAP/WTP/WSP, XML, WML, HTML/SMTP/POP, COM, ADO, HL7, EDI, SOAP, JAVA, JDBC, ODBC, and OLE/DB, creates and, indeed, requires additional layers and additional data for translation and control, adding overhead on top of the actual data being transported and complicating system design, deployment, operation, maintenance, and modification.
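The per-layer overhead described above can be illustrated with a short sketch (not taken from the specification; the header sizes and layer names are illustrative assumptions): each protocol layer wraps the payload in its own header, so the actual application data becomes a shrinking fraction of the bytes on the wire as more translation layers are stacked on top of TCP/IP.

```python
# Illustrative sketch: how stacked protocol layers dilute the payload.
# Header sizes below are typical/assumed values, not measured ones.

LAYERS = [
    ("TCP", 20),       # typical TCP header without options
    ("IP", 20),        # typical IPv4 header
    ("Ethernet", 18),  # Ethernet II header plus frame check sequence
]

def wire_size(payload_len: int, extra_layers=()) -> int:
    """Total bytes transmitted after every layer adds its header."""
    total = payload_len
    for _name, header_len in list(extra_layers) + LAYERS:
        total += header_len
    return total

payload = 100  # bytes of actual application data

# Bare TCP/IP versus the same payload wrapped in assumed HTTP and
# XML-envelope framing, as in the web-based protocols discussed above.
plain = wire_size(payload)
verbose = wire_size(payload, extra_layers=[("HTTP", 200), ("XML envelope", 300)])

print(f"payload fraction, bare TCP/IP: {payload / plain:.0%}")
print(f"payload fraction, HTTP + XML:  {payload / verbose:.0%}")
```

Under these assumed header sizes, the useful payload drops from roughly two thirds of the transmitted bytes to well under a quarter once the additional translation layers are added.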
These deficiencies in such traditional implementations lead to the inefficient utilization of available bandwidth and available processing capacity, and result in unsatisfactory response times. Even a significant upgrade in hardware—e.g., processor power and speed, or transport media and associated hardware—will provide little, if any, increase in system performance from the standpoint of transport speed and processing of data, end-to-end response time, system reliability and security.
With the explosion in the use of web-based protocols, yet another major deficiency has emerged in current implementations as a result of the combination of transport/communication state processing with application/presentation state processing. Many of these technologies, such as XML and SOAP, promote the merging of these two fundamentally different kinds of processing. This merging increases transport and application complexity in both the amount of handshaking and the amount of additional protocol data required. As computer networks and communication systems continue to grow, with the addition of more devices, applications, interfaces, components, and systems, the transport and application complexities caused by this merging will grow to the point that all network and system resources will be exhausted.
Another challenge for the current momentum of the industry is adapting functionality to the emerging wireless communications industry. The wireless devices used in this industry are small, with limited CPU capacity and limited onboard resources. The wireless bandwidth currently available to these devices is also very limited and can be unstable, with fluctuating signal strength. The current average speeds of representative technologies are: CDPD modem, 19.2 kbit/s; RF wireless LAN, 11 Mbit/s. The industry's future expansion cannot rely on software technologies that exhibit major inefficiency in either processing or bandwidth. An example of this is the wireless industry's unsuccessful adoption of web-based technologies, including, for example, business-to-consumer and business-to-business information and transaction processing (e-commerce). Early software projects in the wireless industry are producing unacceptable results and a very low level of customer satisfaction, because these technologies suffer functional performance problems resulting from their high bandwidth and substantially higher CPU requirements. The use of these wireless solutions for internal business functions has been limited due, in large part, to an absence of cost-effective, real-time wireless applications that function with 100% security and reliability. The momentum of the wireless industry is failing to penetrate most of these markets.
Another challenge for the current momentum of the industry is adapting functionality to legacy or mainframe systems. Most primary internal business functions are currently performed using proprietary application software that runs on these legacy systems. These systems are, in many cases, based on older-style architectures that were designed to make efficient use of the limited bandwidth and onboard computer resources available when the technologies were first developed. Many current development efforts that apply inefficient technologies, such as web-based protocols, to environments requiring high efficiency are producing systems that do not provide adequate reliability or security for performing business-critical functions. These systems are not fast enough to perform functions in real time, as they add additional layers of processing that complicate and slow down the business functions. Organizations are therefore reluctant to apply these technologies to their mission-critical internal business functions.
Another approach taken in an effort to address the system performance deficiencies described above involves a change in fundamental system architecture from a two-tier client/server configuration to a three-tier client/server configuration. Three-tier client/server applications are rapidly displacing traditional two-tier applications, especially in large-scale systems involving complex distributed transactions. In two-tier systems, the client always handles data presentation, and the server manages the database system. The primary problem with the two-tier configuration is that the modules representing the business logic (i.e., business services that apply, for example, business rules, data validation, and other business semantics to the data) must be implemented on either the client or the server. When the server implements the business logic (e.g., business rules implemented as stored procedures), it can become overloaded by having to process both database requests and the business rules. If, on the other hand, the client implements the business rules, the architecture can easily grow into a monolithic application reminiscent of the mainframe days.
The three-tier client/server architecture provides an additional separation of the business logic from the database and the actual presentation. FIG. 2 is a functional block diagram of a traditional three-tier model illustrating the usual subsystems in a prior art three-tier system. Referring to FIG. 2, a three-tiered client/server system 10 includes a user services subsystem 12, a business services subsystem 14, and a data services subsystem 16. The data services subsystem 16 performs the function of loading and storing data into one or more databases. The business services subsystem 14 is responsible for using the data services code to manipulate the data. The code in the business services subsystem 14 attaches business rules, data validation, and other business semantics to the data. The user services subsystem 12 is the end-user application that exposes the graphical interface to the user. The code in the user services subsystem 12 is a client of the business services subsystem 14. The business services subsystem 14 applies business semantics to the data before it reaches the end user through the user services subsystem 12. This approach prevents the user from modifying the data beyond the constraints of the business, tightening the integrity of the system.
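The separation of the three subsystems can be sketched in a minimal, in-process form (the class and method names below are hypothetical, chosen only to mirror the subsystems described above): data services load and store records, business services enforce rules and validation, and user services handle presentation only, never touching the data tier directly.

```python
# Hypothetical sketch of the three-tier separation; an in-memory dict
# stands in for the database, and a returned string stands in for the GUI.

class DataServices:
    """Tier 3: loading and storing data into a data store."""
    def __init__(self):
        self._db = {}

    def store(self, key, record):
        self._db[key] = record

    def load(self, key):
        return self._db[key]


class BusinessServices:
    """Tier 2: attaches business rules and validation to the data."""
    def __init__(self, data: DataServices):
        self._data = data

    def save_order(self, order_id, amount):
        # Example business rule: order amounts must be non-negative.
        if amount < 0:
            raise ValueError("order amount must be non-negative")
        self._data.store(order_id, {"amount": amount})

    def get_order(self, order_id):
        return self._data.load(order_id)


class UserServices:
    """Tier 1: presentation only; a client of the business services tier."""
    def __init__(self, business: BusinessServices):
        self._business = business

    def submit(self, order_id, amount):
        self._business.submit_path = None  # no direct path to DataServices
        self._business.save_order(order_id, amount)
        return f"order {order_id} accepted"
```

Because the user services tier can reach the data only through the business services tier, the business rule is enforced regardless of what the presentation layer attempts, which is the integrity property the three-tier model is intended to provide.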
FIG. 3 illustrates the development tools for each subsystem in the prior art three-tier client/server system 10. Visual Basic and Visual C++ 20 are examples of tools available for constructing user interfaces. Transaction server 22, such as the Microsoft Transaction Server (MTS) product available from Microsoft Corporation, is a development tool that can be used to implement the business services subsystem 14 and to control communication among the three subsystems. SQL Server 24, such as the Microsoft SQL Server database system available from Microsoft Corporation, is an example of a database system that could be implemented to support the data services subsystem 16. FIG. 4 illustrates an example of a prior art three-tier client/server system implemented in a computer network.
In a traditional three-tier architecture, a framework of services, sometimes referred to as middleware, is provided that enables the separation of the business logic from the database and the actual presentation. This middleware is software that sits between the business applications and the hardware and operating systems. Middleware, such as, for example, Microsoft Corporation's Microsoft Transaction Server (MTS), provides a host of functionality that simplifies the creation, deployment, operation, and maintenance of large-scale client/server systems. Some of the services provided and functions performed by middleware, such as MTS, are as follows:
- client access to heterogeneous, distributed data stores (i.e., access to data contained in, for example, legacy systems, desktops, and servers), and control and management of access to distributed data through distributed transactions;
- coordinating concurrency between multiple simultaneous users, and communication between all subsystems from the database to the client application;
- coordinating and monitoring the transactional state of components as they interact with various transactional systems, such as databases;
- acknowledging requests for object creation from remote clients and coordinating the creation, maintenance, and destruction of COM component instances and the threads that execute inside them;
- optimizing use of server resources, such as threads, objects, processes, and database connections, by creating a pool of resources and sharing them among multiple clients;
- controlling access to components at runtime;
- enabling efficient changing of the client/server configuration during and after deployment, without the need to change system code;
- insulating the applications from unique hardware and operating system interfaces, which improves the applications' reusability and helps attain platform independence (at least on the server side).
Referring to FIG. 5, a prior art three-tier client/server system 30 includes a plurality of clients 32 communicating with a Microsoft Transaction Server 34. The MTS server 34 communicates with a database server 36 for storing data in and retrieving data from a database 38. The MTS server 34 pools database connections 40, enabling potentially hundreds of components (and hence hundreds of clients 32) to access the database 38 with, for example, only a dozen database connections 40. This results in a reduction in demand for server resources such as database connections, as compared with a two-tier client/server architecture, which requires a database connection for each client. The resulting reduction in demand for server resources translates into a more efficient and scalable system.
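The pooling technique described above can be sketched in a few lines (a simplified, hypothetical illustration, not the MTS implementation): a small, fixed set of connection objects is created up front, and each client request borrows one only for the duration of its work, so a dozen connections can serve hundreds of requests.

```python
# Simplified connection-pool sketch; class names and the Connection
# object are hypothetical stand-ins for real database connections.
import queue


class Connection:
    """Stand-in for a real database connection."""
    def __init__(self, conn_id):
        self.conn_id = conn_id

    def execute(self, sql):
        return f"conn {self.conn_id}: {sql}"


class ConnectionPool:
    """Fixed-size pool shared among many clients."""
    def __init__(self, size):
        self._pool = queue.Queue()
        for i in range(size):
            self._pool.put(Connection(i))

    def acquire(self):
        # Blocks until a connection is free, so demand on the
        # database never exceeds the pool size.
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)


# A dozen connections serving hundreds of short-lived requests,
# instead of one dedicated connection per client as in two-tier systems.
pool = ConnectionPool(size=12)

def handle_client_request(sql):
    conn = pool.acquire()
    try:
        return conn.execute(sql)
    finally:
        pool.release(conn)

results = [handle_client_request("SELECT 1") for _ in range(300)]
```

Each request returns its connection in a `finally` block, so a failed query cannot leak a connection and shrink the pool, which is essential when the pool is the only path to the database.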
However, while the adoption of a three-tier client/server architecture and the ability to utilize middleware providing the additional services and functionality described above represented a major advance in system creation, deployment, operation, and maintenance, neither the architecture nor the middleware provides any services or functionality directed to accelerating data transport and processing (i.e., decreasing end-to-end response time) or to improving the reliability and security of data transport. Systems designed using a three-tier architecture and implemented using middleware such as MTS still suffer from the limitations and drawbacks associated with the software driving the transport and processing of data from end point to end point: design flaws, increased complexity, lack of standardization and compatibility across platforms, networks, and systems, and the utilization and transport of unnecessary overhead, such as control data and communication protocol layers, as discussed above.
Thus, notwithstanding the available hardware solutions, transport software implementations, architectures, and middleware, there is a need for a system, method, and computer program product that provides increased speed, reliability, and security in the transmission and processing of data in computer networks and communication systems. Further, there is a need for a system, method, and computer program product that provides such increased speed, reliability, and security: (1) that can optimize and accelerate data transport and processing, (2) that can more efficiently utilize existing bandwidth in communication systems and computer networks, (3) that is highly scalable, extensible, and flexible, (4) that can seamlessly integrate with any hardware platform, operating system, and any desktop and enterprise application, and (5) that can be implemented on any wired or wireless communication medium.