Computers communicate with each other through communication channels, such as channels that are implemented by networks. Much simplified, a source computer (or “sender”) sends data to a target computer (or “receiver”). Source and target computers are roles or functional descriptors to indicate the direction of communication. Usually, the communication is bidirectional: a particular computer can act as a source computer and as a target computer at the same time.
Taking separation of concerns into account, the computers communicate because they perform distributed computing with specialized functions. According to these functions, the computers are configured differently. For example, a first computer executes a first application (such as a browser application to interact with a user), and a second computer executes a second application (such as a database application to store and provide data for the user). Both computers act as source and as target: the first application causes the first computer (i.e., in the source role) to send data, and the second application causes the second computer (i.e., in the target role) to receive the data. The second application causes the second computer (i.e., this time in the source role) to process the data and return processed data to the first computer (i.e., this time in the target role). Such or similar combinations of computers (and of applications) are frequently referred to as a “client/server arrangement”.
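The role reversal described above (each computer acting as source, then as target) can be sketched as follows. This is a minimal illustrative sketch; the function names and the processing step are assumptions for the example, and the network hop between the two computers is omitted:

```python
# Sketch of the client/server exchange: the first computer sends data
# (source role), the second computer receives and processes it (target,
# then source role), and the first computer receives the result (target role).

def second_computer(request: str) -> str:
    """Second application: receives data, processes it, returns it.
    Upper-casing stands in for arbitrary processing (e.g., a database lookup)."""
    return request.upper()

def first_computer(user_input: str) -> str:
    """First application (e.g., a browser): sends data and receives the
    processed data. The network transport is omitted for brevity."""
    response = second_computer(user_input)
    return response

print(first_computer("query"))
```

The point of the sketch is only the direction of data flow; any real arrangement would involve a network channel between the two roles.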
The first computer and the second computer are often physically remote from each other. For example, while the first computers are located in rooms, in vehicles, or in other places suitable for humans, the second computers are located in places such as server farms, computer data centres, etc. In the art, the term “cloud computing” is frequently used to describe such approaches, usually in view of the second computers.
From a broader perspective, the second computers execute their applications with the goal of providing services to users of the first computers. Examples of such second applications are business applications, database applications, traffic information applications, document handling applications, etc. Frequently, the second applications are provided in so-called software-as-a-service (SaaS) scenarios.
With respect to the number of users and the number of computers, different scenarios can be distinguished, such as:

- Many to one (asymmetric). For example, there can be a plurality of first computers, most of them with single users: mobile devices (such as “smartphones”), personal computers (PCs), etc. A second computer provides services to the plurality of users through the first computers. The applications running on the computers can be adapted accordingly: in the first computers (i.e., in the plural), browser applications focus on user-computer interaction to communicate data to and from the user; in the second computer (i.e., in the singular), the application processes the data. Simplified, data processing can comprise receiving data, modifying data, storing the modified data, and sending the modified data.
- One to one (symmetric). For example, the first and second computers exchange data to provide redundancy.
- Many to many. Such scenarios describe situations, for example, with multiple client computers and with multiple server computers, with the server computers having different functions (e.g., application, database) and various redundancy arrangements.
Letting computers communicate with each other in such or similar scenarios dictates a number of requirements (and/or constraints), among them, simplified:

- Security is required to prevent non-authorized access to the data, accidental interception of the communication, interception with malicious intent, eavesdropping, etc. Security measures are especially applicable to sensitive data (i.e., data that is potentially of value to attackers, interceptors, etc.). Hereby, the basic security concepts of confidentiality, integrity, availability, and non-repudiation need to be achieved.
- Scalability is required for computational resources, especially of the second computer(s) that store and process data, to provide services for a number of first computers, with the number of first computers being variable.
- Adaptability is required to accommodate changes, especially in the application running at the second computer(s). Adaptability is related to complexity.
The requirements can conflict with each other. To address some of the requirements, intermediate computers—such as gateway computers (or proxy computers)—participate in the communication by further processing data. In a typical scenario, an intermediate computer receives data from a first computer (acting as the source computer), pre-processes the data, and sends the pre-processed data to the second computer (acting as the target computer). In response, the intermediate computer receives data from the second computer (source), pre-processes the data as well, and forwards the data to the first computer (target).
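The bidirectional pre-processing by an intermediate computer can be sketched as follows. This is an illustrative sketch only; the pre-processing steps (whitespace trimming) and the stand-in service function are assumptions chosen for brevity, not part of any actual gateway:

```python
# Sketch of an intermediate (gateway) computer that pre-processes data
# in both directions: toward the second computer and back toward the first.

def second_computer(data: str) -> str:
    """Target of the forward path, source of the return path.
    Reversing the string stands in for the service application's processing."""
    return data[::-1]

def gateway(data: str) -> str:
    """Receives data from the first computer, pre-processes it, forwards it
    to the second computer, then pre-processes the response on the way back."""
    outbound = data.strip()             # pre-processing toward the target
    response = second_computer(outbound)
    return response.strip()             # pre-processing toward the first computer

print(gateway("  hello  "))
```

In a real arrangement, the pre-processing steps could be any of the gateway contributions discussed below (authentication, encryption, scanning, caching, and so on).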
In more detail, gateways that participate in the communication can contribute to

- security, for example, by adding authentication and/or authorization, by encrypting and/or decrypting data, by scanning for malicious software (e.g., computer viruses, computer worms), by data leak prevention (DLP) measures, etc.,
- scalability, for example, by caching data (i.e., preliminarily storing it) for re-use, by re-routing, and by load-balancing, and
- adaptability, for example, by being adaptive themselves.
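The scalability contribution by caching can be sketched as follows. The backend function, cache policy, and response format are illustrative assumptions; the point is only that a cached response spares the second computer a repeated request:

```python
# Sketch of a caching gateway: responses are preliminarily stored
# and re-used, reducing load on the second computer.

cache: dict = {}
backend_calls = 0

def backend(request: str) -> str:
    """Stand-in for the second computer's service application."""
    global backend_calls
    backend_calls += 1
    return f"response-to-{request}"

def caching_gateway(request: str) -> str:
    if request not in cache:        # cache miss: forward to the second computer
        cache[request] = backend(request)
    return cache[request]           # cache hit: serve without backend load

caching_gateway("a")
caching_gateway("a")                # second call is served from the cache
print(backend_calls)                # the backend was contacted only once
```

A production gateway would additionally bound the cache size and expire entries, but those policies are outside the scope of this sketch.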
However, conflicts are possible: for example, a semi-adaptive gateway that starts to provide encryption/decryption (as a service to the communication between source and target) but that keeps plain data (i.e., non-encrypted data) in a cache would eventually cause undesired data leakage or the like. Changes, such as the adaptation of security settings, are potentially applicable to all computers: not only to the first and second computers, but also to the intermediate computers (gateways). Further, the frequency of change can be different for each computer. Taking the asymmetric scenario (“many to one”) as an example, the first computers (executing the browser applications) are less frequently in need of changes than the second computer (executing, for example, the SaaS application). Changes often relate to the structure of the data by which the computers communicate. The intermediate computers would have to accommodate the changes at the higher change frequency (i.e., that of the second computers).
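The cache conflict can be made concrete with a short sketch. The “encryption” here is a trivial placeholder (deliberately not real cryptography), and all names are illustrative; the sketch only shows how a gateway that encrypts the channel but caches plain data defeats its own security contribution:

```python
# Sketch of the conflict: encryption on the wire, but plain data in the cache.

cache: dict = {}

def encrypt(plain: str) -> str:
    """Placeholder transformation; NOT real cryptography."""
    return plain[::-1]

def semi_adaptive_gateway(plain: str) -> str:
    cache[plain] = plain         # plain (non-encrypted) data is retained
    return encrypt(plain)        # only the forwarded data is protected

semi_adaptive_gateway("sensitive")
print("sensitive" in cache.values())   # the plain data remains exposed
```

Any attacker who reaches the cache obtains the plain data regardless of the channel encryption, which is precisely the data leakage the text describes.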
Security, scalability, adaptability, and other constraints (or requirements) influence each other, so that the overall complexity increases. Further, intermediate computers potentially communicate with multiple second computers that execute different service applications (many to many). Changes further increase complexity. There is an overall technical problem in complying with the mentioned constraints or requirements without further increasing complexity.