New and better software applications have proliferated in recent years, largely because of the growth of large-scale computing, computer networks, and increasing interest in the “cloud.” The term “cloud computing” generally characterizes a computing environment where a substantial number of computers are interconnected over a large data network, such as the Internet. This is an ideal environment for network-based services, and many business entities take advantage of this arrangement to avail themselves of “software as a service,” or SaaS, in which software applications and data are hosted by a remotely-located computer (or group of computers) accessible over the Internet.
This approach permits subscribers to the service to use a “thin client” at their local site that depends heavily on a fully provisioned server, connected to the client through the Internet, in what is known as a “client-server” relationship. The client-server model is a popular configuration for networked computing in which the remotely-located computer, or server, is designed to share software and data with a local client that simply needs to establish contact with the server to use its software and data resources.
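The client-server relationship described above can be sketched with Python's standard `socket` module. The host, port, request text, and reply format below are illustrative assumptions, not part of any particular service; the server thread stands in for the remotely-located, fully provisioned server, and the main thread plays the thin client:

```python
import socket
import threading

# Hypothetical local endpoint standing in for a remotely-located server.
HOST, PORT = "127.0.0.1", 8765
ready = threading.Event()

def serve_once():
    # Server side: hold a resource and share it with a client that connects.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that the server is listening
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"server response to: {request}".encode())

# Run the server in a background thread, then act as the thin client.
t = threading.Thread(target=serve_once)
t.start()
ready.wait()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))            # establish contact with the server
    cli.sendall(b"GET resource")
    reply = cli.recv(1024).decode()      # use the server's shared resources

t.join()
print(reply)  # server response to: GET resource
```

The client carries almost no logic of its own; it simply contacts the server and consumes what the server provides, which is the essence of the thin-client arrangement.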
Client-server is a relatively simple arrangement that enables a local computer to take advantage of remotely available resources. One of the hurdles that must be cleared in distributed computing environments, however, is the fact that many desirable software applications were originally written for use with specific operating systems and may not be easily transported (or “ported”) to a different operating system or to a host computer that uses a different processor. Data formats may also be unique to particular applications, and consequently incompatible with others. Thus, behind the scenes of distributed computing, there is an ongoing effort to overcome compatibility issues through application integration. In enterprise environments, achieving operational harmony among disparate applications and data formats is often accomplished through “middleware.”
In simple terms, middleware is computer software that resides between the operating system for a particular platform and the application software that provides the desired functionality. Middleware's primary purpose is to facilitate communication and input/output (I/O) operations among applications. Since the above-cited incompatibilities among disparate applications and data structures are also regularly encountered outside the enterprise setting, even in client-server operations, enabling proper communication and I/O is a task toward which a great deal of development work has been directed.
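As an illustration of the idea, the following sketch places a small adapter between callers and a hypothetical “legacy” application that only accepts Python dictionaries; the middleware translates a foreign data format (JSON text) on the way in and out so the two sides can interoperate. The function names and record fields are invented for the example:

```python
import json

# A hypothetical "legacy" application that only understands Python dicts.
def legacy_app(record):
    return {"status": "ok", "name": record["name"].upper()}

def json_adapter(app):
    # Middleware: sits between callers and the application, translating an
    # incompatible data format (JSON text) into the format the application
    # expects, and translating the result back.
    def wrapped(json_text):
        record = json.loads(json_text)   # incoming format -> app format
        result = app(record)             # delegate to the application
        return json.dumps(result)        # app format -> outgoing format
    return wrapped

service = json_adapter(legacy_app)
reply = service('{"name": "alice"}')
print(reply)  # {"status": "ok", "name": "ALICE"}
```

Neither the caller nor the application needed to change; the adapter in the middle absorbs the format mismatch, which is the role middleware plays at enterprise scale.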
REST, or Representational State Transfer, is an architectural style that is considered the underpinning of the World Wide Web. Because of this relationship with the way in which the web operates, REST is frequently employed in distributed computing applications, particularly when web services are involved. In fact, client-server is one of the formal REST interaction constraints applied to resources (components, connectors, and data elements). Another constraint that characterizes systems following REST architectural principles is statelessness: no information associated with a client context is stored on the server between requests. Indeed, a network of web pages can be viewed as a virtual state machine. A user navigates through a web application by a sequence of link selections, each of which invokes a state transition, and content from the page to which the selected hyperlink points is then presented to the user.
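This state-machine view can be sketched as follows; the page names, link structure, and navigation sequence are hypothetical. Each request carries everything the handler needs (the page name), so no client context is stored between requests, and each link selection is a state transition:

```python
# Model a web application as a virtual state machine: each page is a state,
# and following a hyperlink is a transition to another state.
PAGES = {
    "home":    {"content": "Welcome",        "links": ["catalog"]},
    "catalog": {"content": "Products",       "links": ["home", "item42"]},
    "item42":  {"content": "Product detail", "links": ["catalog"]},
}

def get(page_name):
    # Stateless handler: everything needed to answer the request is in the
    # request itself; the server keeps no per-client context between calls.
    page = PAGES[page_name]
    return page["content"], page["links"]

# A client navigates by a sequence of link selections (state transitions).
state = "home"
path = [state]
for choice in ["catalog", "item42"]:
    _, links = get(state)
    assert choice in links      # transition only along advertised links
    state = choice
    path.append(state)

print(path)  # ['home', 'catalog', 'item42']
```

Because `get` consults nothing but its argument, any request can be answered by any server replica at any time, which is precisely the benefit the statelessness constraint is meant to provide.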