With the development of the Internet economy, new services such as e-commerce, enterprise information system outsourcing, and the mobile Internet are growing rapidly. These services drive market demand for data centers, so that data center technology develops rapidly and new technologies emerge continually.
A data center generally refers to an application environment in which an integrated set of information technologies for centrally processing, storing, transmitting, switching, and managing data information is implemented in a physical space. Key devices in a data center equipment room include computers, servers, network devices, storage devices, and the like. As user requirements evolve, the scale and networking complexity of data centers continue to increase, and diverse information technology (IT) applications impose higher requirements on data center networks.
Multiple types of value-added service devices are deployed in current data center networks. In the prior art, a service chaining solution addresses the problem of flexibly deploying value-added service devices in a data center network. Referring to FIG. 1, an existing service chaining solution includes a controller, a delivery node, and service nodes, where the service nodes are value-added service devices, the delivery node is deployed in front of server 1 and server 2, and service node 1 and service node 2 are directly connected to the delivery node. The delivery node and the service nodes are all configured by the controller. The delivery node determines which data flows from a client or the servers need to be sent to service nodes for processing, and determines which service nodes those flows need to be sent to. In the data center, each access involves flows in both directions, for example, an uplink flow from the client to the server and a downlink flow from the server to the client. Because of the processing requirements of a service node, flows in both directions of a service chain generally need to be processed symmetrically.
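The classification role of the delivery node, including symmetric handling of the two flow directions, can be sketched as follows. This is a toy model for illustration only; the flow key, the `chain_table`, and all node names are assumptions, not part of any real product:

```python
# Toy flow classifier on the delivery node: map a flow's key to an
# ordered list of service nodes. The reverse (downlink) flow uses the
# same chain in reverse order, so both directions are processed
# symmetrically by the same service nodes.

def flow_key(src, dst, proto):
    return (src, dst, proto)

# Chain configured by the controller (illustrative entry).
chain_table = {
    flow_key("client", "server_2", "tcp"): ["service_node_1", "service_node_2"],
}

def classify(src, dst, proto):
    key = flow_key(src, dst, proto)
    if key in chain_table:                    # uplink flow: use chain as-is
        return chain_table[key]
    rev = flow_key(dst, src, proto)           # downlink flow: reverse chain
    if rev in chain_table:
        return list(reversed(chain_table[rev]))
    return []                                 # flow needs no service processing

print(classify("client", "server_2", "tcp"))   # ['service_node_1', 'service_node_2']
print(classify("server_2", "client", "tcp"))   # ['service_node_2', 'service_node_1']
```

The reversed lookup is what gives the symmetry the text describes: a downlink packet traverses the same service nodes as the matching uplink packet, only in the opposite order.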
In the prior art, an uplink data flow is processed in a service chain as follows: the controller first sends a service chain configuration parameter to the delivery node; when a client initiates access to server 2, the delivery node receives a first data packet that is sent by the client and that matches the service chain configuration parameter, and sends the first data packet to service node 1 for processing; after processing the first data packet, service node 1 sends a resulting second data packet back to the delivery node; the delivery node then sends the second data packet to service node 2, which processes it and sends a resulting third data packet back to the delivery node; finally, the delivery node sends the third data packet to server 2. A downlink data flow is processed in the service chain in a similar manner.
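The hop-by-hop forwarding described above can be sketched as a minimal simulation. All names and the packet-transformation function are illustrative assumptions; the point is that the packet returns to the delivery node after every service node:

```python
# Minimal simulation of the prior-art service chain: between any two
# service nodes, the packet always passes back through the delivery node.

def make_service_node(name):
    """A service node simply transforms the packet it receives
    (here modeled as appending its name to a string)."""
    def process(packet):
        return f"{packet}->{name}"
    return process

def delivery_node_forward(packet, service_nodes, server):
    """Send the packet to each service node in turn; after each node,
    the packet comes back to the delivery node before moving on."""
    hops = []  # record every link traversal for illustration
    for node_name, process in service_nodes:
        hops.append(("delivery", node_name))   # delivery node -> service node
        packet = process(packet)
        hops.append((node_name, "delivery"))   # service node -> delivery node
    hops.append(("delivery", server))          # final hop to the server
    return packet, hops

chain = [("service_node_1", make_service_node("sn1")),
         ("service_node_2", make_service_node("sn2"))]
packet, hops = delivery_node_forward("pkt_from_client", chain, "server_2")
print(packet)      # pkt_from_client->sn1->sn2
print(len(hops))   # 5 link traversals for a 2-node chain
```

Even this two-node chain needs five link traversals, four of which involve the delivery node, which is the behavior the next paragraph identifies as the bottleneck.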
It is found that, in the prior art, after each stage of service processing, the data packet produced by a service node must first be returned to the delivery node, which then sends it to the next service node; that is, the delivery node implements centralized control over the direction of the data flow. Because a data packet must pass through the delivery node repeatedly, packet processing efficiency is very low when the service chain includes a relatively large quantity of service nodes.
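The scaling of this overhead can be made concrete with a short calculation, under the simple assumption that each service node costs one outbound and one return link traversal through the delivery node:

```python
def delivery_node_traversals(n_service_nodes):
    # Each service node costs two link traversals through the delivery
    # node (out and back); one final traversal delivers to the server.
    return 2 * n_service_nodes + 1

# Link traversals grow linearly with chain length.
for n in (2, 5, 10):
    print(n, delivery_node_traversals(n))   # 2->5, 5->11, 10->21
```

Under this model, a 10-node chain forces 21 link traversals per packet, 20 of them involving the delivery node, which is why a long chain makes the delivery node a throughput bottleneck.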