In the data communication field involving computers and networking, there is a basic concept of the "dialog". In computing circles, a dialog is the exchange of human input and the immediate machine response that forms a "conversation" between an interactive computer and the person using it. The term also refers to the exchange of signals by computers communicating on a network. Dialogs can carry data between different application processes and over computer networks: in computer networking, dialogs provide data communication between application processes running on different systems or different hosts, and a dialog can further carry data between application processes running on the same host.
There is a generally recognized OSI (Open Systems Interconnection) standard for worldwide message-transfer communications that defines a framework for implementing transfer protocols in seven layers. Control is passed from one layer to the next, starting at the Application Layer in one station, proceeding down to the bottom layer, over the channel to the next station, and back up the layers of the hierarchy, which is generally recognized as having seven layers. Most communication networks use the seven-layer system; however, there are some non-OSI systems which incorporate two or three layers into one layer.
The layers involved for network users are generally designated, from the lowest layer to the highest layer, as follows:
1. The Physical Layer;
2. The Datalink Layer;
3. The Network Layer;
4. The Transport Layer;
5. The Session Layer;
6. The Presentation Layer; and
7. The Application Layer.
The Application Layer 7 (top layer) defines the language and syntax that programs use to communicate with other programs; it represents the purpose of communicating. For example, a program in a client workstation uses commands to request data from a program in a server. The common functions at this Application Layer level are opening, closing, reading and writing files, transferring files and e-mail, executing remote jobs, and obtaining directory information about network resources.
The Presentation Layer 6 negotiates and manages the way data is represented and encoded between different computers. For example, it provides a common denominator between ASCII and EBCDIC machines, as well as between different floating-point and binary formats. This layer is also used for encryption and decryption.
The Session Layer 5 coordinates communications in an orderly manner. It determines one-way or two-way communication and manages the dialog between both parties, for example, making sure that the previous request has been fulfilled before the next request is sent. This Session Layer also marks significant parts of the transmitted data with checkpoints to allow fast recovery in the event of a connection failure. Sometimes the services of this Session Layer are included in the Transport Layer 4.
The Transport Layer 4 ensures end-to-end validity and integrity. The lower Datalink Layer (Layer 2) is only responsible for delivering packets from one node to another; thus, if a packet should get lost in a router somewhere in the enterprise internet, the Transport Layer will detect this situation. This Transport Layer 4 ensures that if a 12 MB file is sent, the full 12 MB will be received. OSI transport services sometimes include Layers 1 through 4, which are collectively responsible for delivering a complete message or file from a sending station to a receiving station without error.
The Network Layer 3 routes messages to different networks. The node-to-node function of the Datalink Layer (Layer 2) is extended across the entire internetwork, because a routable protocol such as IP, IPX, or SNA contains a "network address" in addition to a station address. If all the stations are contained within a single network segment, then the routing capability of this layer is not required.
The Datalink Layer 2 is responsible for node-to-node validity and integrity of the transmission. The transmitted bits are divided into frames, for example, an Ethernet or Token Ring frame for Local Area Networks (LANs). Layers 1 and 2 are required for every type of communication operation.
The Physical Layer 1 is responsible for passing bits onto and receiving them from the connecting medium. This layer has no understanding of the meaning of the bits, but deals with the electrical and mechanical characteristics of the signals and the signaling methods. As an example, the Physical Layer 1 comprises the RTS (Request to Send) and the CTS (Clear to Send) signals in an RS-232 (a standard for serial transmission between computers and peripheral devices) environment, as well as TDM (Time Division Multiplexing) and FDM (Frequency Division Multiplexing) techniques for multiplexing data on a line.
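The layered flow described above, in which control passes down the stack at the sending station and back up at the receiver, can be illustrated with a short sketch. This is purely illustrative Python (the toy headers and layer names are assumptions for demonstration, not any actual protocol encoding): each layer wraps the payload from the layer above with its own header on the way down, and strips it on the way up.

```python
# Illustrative sketch of OSI-style encapsulation: each layer prefixes a
# toy header on transmit and removes it on receive. The header format is
# invented for demonstration only.

LAYERS = ["application", "presentation", "session",
          "transport", "network", "datalink", "physical"]

def encapsulate(payload: bytes) -> bytes:
    """Walk down the stack, prefixing one toy header per layer."""
    for layer in LAYERS:
        payload = f"[{layer}]".encode() + payload
    return payload

def decapsulate(frame: bytes) -> bytes:
    """Walk back up the stack, stripping each toy header in reverse order."""
    for layer in reversed(LAYERS):
        header = f"[{layer}]".encode()
        assert frame.startswith(header), f"missing {layer} header"
        frame = frame[len(header):]
    return frame

wire = encapsulate(b"hello")
assert decapsulate(wire) == b"hello"   # round trip recovers the payload
```

The symmetry of the two loops mirrors the text's description of control proceeding to the bottom layer, over the channel, and back up the hierarchy at the peer station.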
It will be seen that present-day communication systems generally have a high-bandwidth data throughput capability, with high-speed network technologies operating at rates on the order of 100 MB per second to 1 gigabit per second.
However, the problems of delay, or latency, may sometimes be significant. Latency is generally considered to be the time interval between the time a transaction is issued and the time the transaction is reported as completed. In certain systems having high latency, the round-trip time for two clients communicating with each other to complete a data request can be on the order of milliseconds.
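The latency definition given above, the interval from issuing a transaction to its reported completion, can be sketched as a simple measurement harness. This is an illustrative sketch only; the `slow_echo` peer is a hypothetical stand-in for a remote client, with the delay chosen arbitrarily to simulate network and protocol-stack overhead.

```python
import time

def measure_round_trip(request_fn, payload):
    """Return (response, latency_seconds) for one request/response exchange.

    Latency here follows the text's definition: the interval between the
    time a transaction is issued and the time it is reported complete.
    """
    start = time.perf_counter()
    response = request_fn(payload)
    return response, time.perf_counter() - start

# Hypothetical stand-in for a remote peer: echoes after a simulated delay.
def slow_echo(data):
    time.sleep(0.005)          # pretend 5 ms of network + stack latency
    return data

resp, latency = measure_round_trip(slow_echo, b"ping")
assert resp == b"ping" and latency >= 0.005
```

A round trip dominated by such per-layer delays is exactly the millisecond-order cost the text attributes to the layers at and below the Transport Layer.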
The delays in communication due to "latency" occur in conventional communication systems partly because of overhead in the communication layers, and especially because of latency in the layers below the Transport Layer 4, i.e., Layers 3, 2 and 1. Even in high-speed data communication systems, the Transport Layer 4 is still seen to impart substantial latency in communications.
The present Network Data Path Interface 30 (FIGS. 1, 2) method and system describes the functions and sequential operations for dialog messages between a Network Provider 20 and the I/O 40. This enhances the speed of dialog exchanges and thereby improves communication system performance.
FIG. 3B shows a diagram of the major components of part of a datacom system illustrating the new (30) and the earlier (20b) interfaces for the Network Data Path. Referring to FIG. 3B, the top block is the DSS 10 (Distributed System Service). Connected to it are a port file 14 and a synchronous port CB 16. Also connected to the DSS 10 is a Cooperative System Interface (with a Connection Library) 12, which connects to the Network Provider 20. Also interfacing to the Network Provider 20 are the PIE (Process Intercommunication Element) Connection Library Elements 18c and 18b, of which 18c connects using a Connection Library (CL), while the other (18b) connects using a Connection Block (CB).
A Distributed Application Supervisor (DAS) 22 connects to the Provider 20, while also providing output to a path input control 23i (for the prior-path CB) and a Supervisor CB/CL control 23s. These last two blocks feed the Network Processor Support module 35, which provides output to the Logical I/O (LIO) 34 and to the Direct Interface 32, and thence to the I/O 40. The Physical I/O 40 simulates "Gather" and provides output to the Integrated Communication Processor 42, the Emulated Integrated Communication Processor 44, and/or the Direct Integrated Communication Processor 46, which provide communication to Channel Adapters (CA) in a Network Interface Card (NIC) of the Network Processor 50. The DAS 22 communicates with software in the Network Processor Environment 50.
The Network Processor Environment designated 50 in FIG. 3B is an architectural drawing showing the software contents of a Network Processor, which provides the system with a Control 56, a Path Subsystem (PSS) 54, and Protocol Stack Extension Logic (PSEL) 52. This Network Processor Environment includes the processors 42, 44, 46.
NETWORK DATA PATH INTERFACE (30 FIG. 2):
Currently, Unisys Corporation's computer architecture supports two interfaces to the Network Providers: the standard user-visible interface through the port files, and a system-software synchronous interface called Sync_Ports. Sync_Port users can avoid copying incoming data in certain cases and can make decisions about where to copy it, because they are allowed to look at the data before copying.
The Sync_Port interface can also be used to eliminate processor switching in the input data path for certain applications. Often, though, the strict rules about what could be processed in-line as part of notification of input resulted in the process switch merely being moved into the Sync_Port user's code.
The BNA and TCP/IP type Network Providers provide the Sync_Port interface (which is used primarily by COMs_PSHs and the Unisys-supplied DSSs) with a performance boost.
The Cooperative Services Interface 12 of FIG. 2 and FIG. 5 provides an additional performance benefit over the Sync_Ports by allowing a Network Provider 20 and a DSS 10 to bypass the Port File code in the Master Control Program (MCP), by allowing them to share data, and by relaxing the rules about what can be performed as part of an "input" notification.
The interface between the MCP's Port File code and the Network Providers (the PIE interface) was earlier implemented as an old-style Connection Block (CB) 18b, FIG. 3B. Changing this to a Connection Library (CL) 18c provides a performance advantage by eliminating the MCP overhead required to access the entry points exported via the Connection Block (CB).
Because Connection Libraries (CL) can export data items in addition to procedures, this change also allows the Port File code and the Network Providers to share dialog-oriented locks. Such sharing allows the elaborate lock/deadlock-avoidance code previously employed to be simplified greatly, thereby not only improving performance but also closing numerous timing windows. Sharing locks in this way also obviates the need for several of the more complex interfaces in the prior interface.
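The lock-sharing arrangement described above can be pictured with a minimal sketch. The class and method names below are invented for illustration and do not correspond to the actual Unisys interfaces; the point is only that both sides synchronize on one exported dialog lock instead of coordinating two separate locks with deadlock-avoidance logic.

```python
import threading

class Dialog:
    """Toy model of a dialog whose lock is exported to both the
    port-file code and the network provider (names are illustrative,
    not the actual Connection Library entry points)."""
    def __init__(self):
        self.lock = threading.Lock()   # one lock, visible to both sides
        self.queue = []

class PortFileCode:
    def deliver(self, dialog, msg):
        with dialog.lock:              # same lock the provider takes
            dialog.queue.append(msg)

class NetworkProvider:
    def drain(self, dialog):
        with dialog.lock:              # no lock-ordering protocol needed
            msgs, dialog.queue = dialog.queue, []
            return msgs

d = Dialog()
PortFileCode().deliver(d, b"data")
assert NetworkProvider().drain(d) == [b"data"]
```

Because both parties contend on the single shared lock, there is no possibility of the classic two-lock deadlock, which is the simplification the text attributes to exporting data items through the Connection Library.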
The Unisys E-mode based portions of the Network Providers 20 were previously enabled to communicate with their ICP-based components via an interface provided by Network Processor Support 35, FIG. 4. Network Processor Support 35 provided a complex path Connection Block (CB) interface which the Network Providers used to get the data they wished to send into an I/O-capable buffer, and Network Processor Support generated and parsed the QSP (Queue Service Provider) protocol in order to "multiplex" the numerous dialogs (that the Network Providers had) over a single Physical Unit Queue.
In the new, improved architecture, multiple queues are now provided between the Unisys E-mode environment and a given Network Processor Environment, thus obviating the need for this previous multiplexing function and eliminating the de-multiplexing bottleneck on the Network Processor/Controller stack on input.
Since QSP (Queue Service Provider) protocol generation is very simple, that function is now moved into the Network Provider 20. This re-distribution of function allows transmit operations, formerly handled through Network Processor Support 35 (FIGS. 3B and 4), to be accomplished by a Read/Write issued directly (Direct Interface 32) to the Physical I/O procedure (FIG. 3B), providing transport to the Channel Adapter environment, except in the case of the old Integrated Communication Processors (ICPs), where multiple queues still must be simulated in Network Processor Support 35, FIGS. 3B and 4.
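The single-queue multiplexing scheme that the new architecture eliminates can be sketched as follows. The two-byte dialog-id header is a stand-in invented for illustration; the actual QSP protocol format is not described in the text. The sketch shows why a de-multiplexing step is unavoidable when many dialogs share one physical unit queue.

```python
from collections import defaultdict
from queue import Queue

# Sketch of the prior single-queue scheme: every dialog's messages are
# tagged with a dialog id (a toy stand-in for the QSP protocol header)
# and funneled through one physical unit queue, then de-multiplexed on
# the far side -- the bottleneck the multiple-queue design removes.

physical_queue = Queue()

def qsp_send(dialog_id: int, data: bytes):
    """Prepend a tiny 'QSP' header and enqueue on the one shared queue."""
    physical_queue.put(dialog_id.to_bytes(2, "big") + data)

def qsp_demultiplex():
    """Drain the shared queue, sorting frames back out by dialog id."""
    per_dialog = defaultdict(list)
    while not physical_queue.empty():
        frame = physical_queue.get()
        per_dialog[int.from_bytes(frame[:2], "big")].append(frame[2:])
    return per_dialog

qsp_send(1, b"a"); qsp_send(2, b"b"); qsp_send(1, b"c")
assert qsp_demultiplex() == {1: [b"a", b"c"], 2: [b"b"]}
```

With one queue per dialog, as in the improved architecture, both the header generation and the sorting loop above disappear from the receive path.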
To avoid the necessity of copying data in order to assemble Network Provider-generated header data and data from multiple-use buffers into one contiguous memory area, the ability to "Gather" data from multiple buffers on output is added to the I/O processor in the IOM 40. The Physical I/O 40 simulates "Gather" in cases where the I/O processor does not support it directly.
Additionally, a "Scatter" feature is provided so that a single incoming data message can be split across multiple buffers. This is used by the Network Provider(s) 20 to ease their memory-management problems; they thus have a consolidation code path (in Network Provider 20, FIG. 3B) to cope with the cases where Scatter is not provided by the I/O processor.
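The Gather and Scatter operations just described can be illustrated with a short sketch of their software-simulated forms, which is what the text says the Physical I/O falls back to when the I/O processor lacks direct support. The buffer sizes and contents below are illustrative assumptions.

```python
# Illustrative simulation of scatter/gather I/O. "Gather" coalesces a
# provider-generated header and payload buffers into one contiguous
# output message without a staging copy by the caller; "Scatter" splits
# one incoming message across multiple fixed-size buffers.

def gather(buffers: list) -> bytes:
    """Assemble several output buffers into one contiguous message."""
    return b"".join(buffers)

def scatter(message: bytes, buffer_size: int) -> list:
    """Split one incoming message across multiple buffers."""
    return [message[i:i + buffer_size]
            for i in range(0, len(message), buffer_size)]

header, body = b"HDR:", b"payload-bytes"
wire = gather([header, body])          # header + data sent as one message
assert wire == b"HDR:payload-bytes"
assert gather(scatter(wire, 8)) == wire   # scatter then gather round-trips
```

In real scatter/gather-capable hardware (or POSIX `writev`/`readv`), the coalescing and splitting happen at the I/O processor rather than in a software loop; the sketch only shows the equivalent data movement.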
As a result of the improvements to the Network Data Path Interface 30 (FIGS. 2, 3A), there is a reduced need to copy data, throughput performance is enhanced, more transmissions can occur simultaneously by reducing routing overhead at destination end points, there is greater capacity for multi-threading, and the protocol stacks can handle buffers more efficiently.