When a packet is delivered between a data forwarding device and a service processing node, a request packet and a response packet each need to pass through the data forwarding device twice, which increases the bandwidth requirement on the data forwarding device. For example, if the bandwidth required between the client and the server is 10 Gbit/s, the data forwarding device needs to provide 20 Gbit/s. During service processing, however, for certain service flows, once protocol recognition or service processing has been performed on an uplink request of a terminal, subsequent uplink packets and downlink packets no longer need to be forwarded to the service processing node or forwarded back by the service processing node; instead, they may be forwarded directly by the data forwarding device between the terminal and a device of a service provider (Service Provider, SP), thereby reducing both the bandwidth requirement and the processing load on the data forwarding device and the service processing node.
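The offload behavior described above can be sketched as follows. This is a minimal illustration only; the names (`FlowTable`, `handle_uplink`, the dictionary-based packet representation) are assumptions for the example and do not appear in the source.

```python
# Illustrative sketch of per-flow offload after protocol recognition.
# All names and data structures here are hypothetical.

class FlowTable:
    """Tracks which flows may bypass the service processing node."""

    def __init__(self):
        self.offloaded = set()  # keys of flows forwarded directly

    @staticmethod
    def key(packet):
        return (packet["src"], packet["dst"], packet["dst_port"])

    def mark_offloaded(self, packet):
        self.offloaded.add(self.key(packet))

    def is_offloaded(self, packet):
        return self.key(packet) in self.offloaded


def handle_uplink(table, packet, recognize):
    """Return the forwarding decision for an uplink packet.

    Early packets of a flow traverse the service processing node, which
    performs protocol recognition; once the node decides the flow needs
    no further processing, later packets go straight to the SP device.
    """
    if table.is_offloaded(packet):
        return "direct to SP"
    # recognize() stands in for protocol recognition / service processing
    # performed on the service processing node.
    if recognize(packet):
        table.mark_offloaded(packet)
    return "via service processing node"
```

A usage sketch: the first uplink packet of a flow is sent through the service processing node, and, once recognition succeeds, every subsequent packet of the same flow bypasses it.

```python
table = FlowTable()
pkt = {"src": "10.0.0.1", "dst": "203.0.113.5", "dst_port": 80}
first = handle_uplink(table, pkt, lambda p: True)   # traverses the node
later = handle_uplink(table, pkt, lambda p: True)   # bypasses the node
```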
In the conventional art, however, data is forwarded according to statically configured policies: rules on the destination IP address and port are configured in advance, and uplink and downlink data packets that match these static rules are routed directly (that is, forwarded over a high-speed channel). This manner of controlling forwarding policies is not flexible, and it imposes a large number of restrictions in application, which does not facilitate wide use.
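The conventional static-policy approach can be sketched as below. The rule format and function name are illustrative assumptions, not taken from the source; the point is that the rule set is fixed at configuration time and cannot adapt per flow.

```python
# Illustrative sketch of conventional static fast-path policies.
# The rule entries and names here are hypothetical.

# (destination IP, destination port) pairs configured in advance.
STATIC_RULES = {
    ("203.0.113.5", 80),
    ("203.0.113.9", 443),
}


def route(packet):
    """Packets matching a preconfigured rule take the high-speed
    channel; all other traffic is forwarded through the service
    processing node."""
    if (packet["dst"], packet["dst_port"]) in STATIC_RULES:
        return "high-speed channel"
    return "service processing node"
```

Because `STATIC_RULES` is fixed before any traffic arrives, a flow that only needs service processing for its first few packets still cannot be offloaded unless its destination happens to match a preconfigured rule, which is the inflexibility noted above.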