Companies are rapidly adding dynamic, rich, and interactive capabilities to improve user experiences, grow online audiences, and drive page views and transactions. As Web sites evolve toward fully rich, dynamic online channel experiences, businesses face a new and stark challenge: dynamic content cannot be cached and therefore takes longer to load in a Web page. Today's consumers and businesspeople have come to expect highly personal and interactive online experiences. Whether they are making a purchase, booking a reservation, or watching a movie, they demand a smooth, flawless experience, and they will not hesitate to click to another site when their expectations go unmet. Sluggish site performance and slow page downloads diminish the user experience and increase site abandonment. The result is lower customer loyalty and revenue.
Content Distribution Network CDN providers currently offer traffic acceleration services on the Internet to address the issue of Quality of Experience QoE for Internet-based services, from regular browsing to e-commerce. An example of such an acceleration offering is the EdgePlatform [see: Beyond Caching; The User Experience Impact of Accelerating Dynamic Site Elements across the Internet, 2008]. The EdgePlatform provides insight into Internet traffic patterns and is a dynamic site acceleration platform built on three critical technologies used to carry site content requests from the customer's browser to the company's origin data center and back in an instant.
The three technologies mentioned below compensate for the inadequacies of the BGP, TCP and HTTP protocols and effectively create a new Internet platform for today's dynamic online businesses:
    1. SureRoute for Performance
    2. Transport Protocol Optimization
    3. Prefetching
Traffic acceleration is based on a set of components. These include: a Domain Name Server DNS system with a global mapping function, a set of distributed acceleration servers, and a Service Level Agreement SLA between a Content Distribution Network CDN provider and a portal provider (web application provider). The SLA also entails a set of configurations on the portal provider's DNS server.
The following steps summarize the acceleration process:
    1. The CDN provider's dynamic mapping system directs user requests for application content to an optimal acceleration server.
    2. Route optimization technology identifies the fastest and most reliable path back to the origin infrastructure to retrieve dynamic application content.
    3. A high-performance transport protocol transparently optimizes communications between the acceleration server and the origin, improving performance and reliability.
    4. The acceleration server retrieves the requested application content and returns it to the user over secure, optimized connections.
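The acceleration steps above can be sketched in code. This is a minimal illustration only: the function and server names are hypothetical, and real CDN mapping, route optimization and transport logic are far more involved than the latency-table lookups used here.

```python
# Illustrative sketch of the four-step acceleration process.
# All names and latency figures are invented for illustration.

ACCELERATION_SERVERS = {
    # hypothetical RTT (ms) from the requesting user to each server
    "eu-west": 12.0,
    "us-east": 95.0,
    "ap-south": 180.0,
}

ROUTES_TO_ORIGIN = {
    # candidate paths from an acceleration server back to the origin,
    # each with a hypothetical measured RTT (ms)
    ("eu-west", "origin"): [("direct", 140.0), ("via-relay-1", 90.0)],
}

def map_to_acceleration_server(servers):
    """Step 1: DNS-based mapping directs the user to the closest server."""
    return min(servers, key=servers.get)

def select_route(routes, server, origin="origin"):
    """Step 2: route optimization picks the fastest path to the origin."""
    return min(routes[(server, origin)], key=lambda r: r[1])

def fetch_content(server, route):
    """Steps 3-4: the optimized transport fetches the dynamic content
    and the acceleration server returns it to the user."""
    return f"content via {server} over {route[0]}"

server = map_to_acceleration_server(ACCELERATION_SERVERS)
route = select_route(ROUTES_TO_ORIGIN, server)
print(fetch_content(server, route))
```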
The CDN providers understand that within a few years the Internet will be accessed mostly via mobile broadband rather than via fixed broadband. For this reason they would like to be able to offer their services to their customers (content providers) in mobile networks, i.e. to be able to perform acceleration of traffic for terminals connected to mobile networks.
FIGS. 1 (1a and 1b) disclose a system comprising an Internet Service Provider ISP network 14, an Internet network 15 and an operator's mobile network 16A. The system is an overlay mechanism that operates by receiving IP packets at one set of servers, tunneling these packets through a series of servers, and delivering them to a fixed, defined IP address. The ISP network 14 comprises a target server 12, which is an entity whose traffic is to be tunneled through the overlay mechanism. Edge servers are located in the ISP network and in the Internet network. An origin edge server 11 can be seen in FIG. 1 and is responsible for receiving, encapsulating and forwarding IP packets. The operator's mobile network 16A comprises a Gateway GPRS Support Node GGSN 6 and a Radio Network Controller RNC1 2. User equipment UE 1 is in radio connection via a base station with RNC1 2 in FIG. 1. Currently, the deepest deployment of traffic acceleration servers in mobile networks that the CDN providers can offer is at the GGSN level. This is disclosed in FIG. 1a (the figure belongs to the prior art) with an acceleration tunnel between the origin edge server 11 and the Gateway GPRS Support Node GGSN 6. However, due to latencies existing below the GGSN [see: Latency in Broadband Mobile Networks, C. Serrano et al], the CDN providers would like to be able to go deeper into the mobile network. Deploying the accelerators at the RNC in 3G networks brings operators even closer to the end users; in this way they can provide improved QoE to end users and thereby create a new offering for their customers, the content providers. This is disclosed in FIG. 1b with an acceleration tunnel between the origin edge server 11 and the Radio Network Controller RNC1 2.
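The overlay mechanism's encapsulate-and-forward behavior can be sketched as follows. The packet layout and field names are purely illustrative assumptions; real overlays encapsulate at the IP layer (e.g. IP-in-IP or GRE style), not as dictionaries.

```python
# Minimal sketch of overlay tunneling: the origin edge server wraps a
# user's IP packet in an outer header addressed to the tunnel endpoint
# (the GGSN in FIG. 1a, or the RNC in FIG. 1b), which unwraps it and
# delivers the inner packet unchanged. Field names are illustrative.

def encapsulate(packet: dict, tunnel_dst: str) -> dict:
    """Wrap an inner packet for delivery through the overlay tunnel."""
    return {"outer_dst": tunnel_dst, "inner": packet}

def decapsulate(tunneled: dict) -> dict:
    """Unwrap at the tunnel endpoint, recovering the original packet."""
    return tunneled["inner"]

user_packet = {"src": "10.0.0.1", "dst": "target-server", "data": b"GET /"}
on_wire = encapsulate(user_packet, tunnel_dst="rnc1")
assert decapsulate(on_wire) == user_packet  # inner packet is unchanged
```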
Latencies are mainly due to the characteristics of the access technologies in the different network segments. The values, however, will vary depending on the transmission technology (e.g. fiber, coaxial cable or microwave) and also on the specific characteristics of a given operator's network. Quality of Experience QoE is a subjective measure of a customer's experience with a technology or service. For the Web browsing experience, such as for example purchasing an airline ticket online, the responsiveness of the web portal application is of crucial importance for the success of the business transaction. In the worst-case scenario, due to the user's perception of the 'slowness' of the web portal application, the end user may abandon the attempt to use the portal, which leads to a business loss for the portal owner.
A model is helpful to illustrate the potential performance bottlenecks in any application environment in general, as well as in a Web 2.0 environment in particular. A model discussed in [The 2009 Handbook of Application Delivery: A Guide to Decision Making in Challenging Economic Times, Dr. Jim Metzler] is a variation of the application response time model created by Sevcik and Wetzel [Why SAP Performance Needs Help, NetForecast Report 5084, http://www.netforecast.com/ReportsFrameset.htm]. As shown below in the application Response Time Model, the application response time (R) is impacted by the amount of data being transmitted (Payload), the WAN bandwidth, the network round-trip time (RTT), the number of application turns (AppTurns), the number of simultaneous Transmission Control Protocol TCP sessions (Concurrent Requests), the server-side delay (Cs) and the client-side delay (Cc).
    R ≈ Payload/Goodput + (AppTurns × RTT)/ConcurrentRequests + Cs + Cc
Lab tests disclosed in [The 2009 Handbook of Application Delivery: A Guide to Decision Making in Challenging Economic Times, Dr. Jim Metzler] show the effect latency has on an inquiry-response application. As network latency increases up to 75 ms, it has little impact on the application's response time, but once network latency rises above 150 ms, the application's response time degrades rapidly and is quickly well above the target response time.
The Transmission Control Protocol TCP is designed for low-latency, high-bandwidth networks with few communication errors. The standard TCP settings are therefore not optimal for mobile networks, which are characterized by high latency, low to medium bandwidth and more communication errors than fixed networks. For this reason, TCP also includes a number of wireless extensions that can be used to maximize throughput in mobile networks.
Wireless TCP standard settings have been recommended by the Open Mobile Alliance OMA. The minimum window size required to maximize TCP performance is given by the Bandwidth Delay Product (BDP), where Bandwidth is the available bandwidth and Delay is the round-trip time of the given path [RFC1323]. The maximum window size is the minimum of the send and receive socket buffers. The receive socket buffer generally determines the advertised window on the receiver. The congestion window on the sender further limits the amount of data the sender can inject into the network, depending on the congestion level on the path. If the maximum window size is too small relative to the available bandwidth of the network path, the TCP connection will not be able to fully utilize the available capacity. If the maximum window is too large for the network path to handle, the congestion window will eventually grow to the point where TCP overwhelms the network with too many segments, some of which are discarded before reaching the destination.
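The window-sizing trade-off can be made concrete with the BDP formula. The bandwidth and RTT figures below are illustrative assumptions for an HSPA-like mobile path, not OMA-recommended values.

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-Delay Product: the minimum window (in bytes) needed
    to keep the path fully utilized [RFC1323]."""
    return bandwidth_bps * rtt_s / 8  # bits -> bytes

def achievable_throughput_bps(window_bytes: float, rtt_s: float,
                              bandwidth_bps: float) -> float:
    """When the window is smaller than the BDP, throughput is capped
    at window/RTT rather than the link bandwidth."""
    return min(bandwidth_bps, window_bytes * 8 / rtt_s)

# Illustrative figures: 7.2 Mbit/s available bandwidth, 150 ms RTT.
window = bdp_bytes(7_200_000, 0.150)        # 135,000 bytes (~132 KiB)
capped = achievable_throughput_bps(65_536, 0.150, 7_200_000)
print(f"minimum window ≈ {window / 1024:.0f} KiB")
print(f"with a 64 KiB window: {capped / 1e6:.1f} Mbit/s")
```

With a default 64 KiB window the connection reaches only about half of the 7.2 Mbit/s path capacity, which illustrates why the window must be sized to the BDP on high-latency mobile paths.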
According to [Using Radio Network Feedback to Improve TCP Performance over Cellular Networks, N. Möller et al], a TCP proxy improves performance in mobile networks for person-to-person as well as person-to-content services. The performance improvements are based on the reduction of latency (RTT). An architecture has been proposed whereby Radio Network Feedback (RNF) is signaled to a TCP proxy for the selection of the best TCP settings for egress traffic. The RNF relies on the radio resource management (RRM) algorithms, located in a Radio Network Controller RNC, which make use of information such as uplink interference, total downlink power and orthogonal codes to determine the network condition. The problem with the existing solutions is that the last segment of the network, i.e. between the Radio Network Controller RNC (the base station in some scenarios) and the end user, still has significant latency (RTT).
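The RNF idea of adapting proxy TCP settings to radio conditions can be sketched as follows. The thresholds, headroom factor and parameter names are invented for illustration; the cited work derives conditions from RRM measurements (uplink interference, downlink power, orthogonal codes) rather than from the bandwidth/RTT pair used here.

```python
# Hypothetical sketch: a TCP proxy picks egress settings from coarse
# radio-network feedback. All thresholds and values are illustrative.

def select_tcp_settings(available_bw_bps: float, rtt_s: float) -> dict:
    """Size the send window to the reported radio conditions."""
    bdp = available_bw_bps * rtt_s / 8        # bytes needed to fill the pipe
    return {
        "send_window_bytes": int(bdp * 1.1),  # small headroom over the BDP
        "initial_cwnd_segments": 10 if rtt_s > 0.1 else 4,
    }

good_cell = select_tcp_settings(7_200_000, 0.08)    # lightly loaded cell
loaded_cell = select_tcp_settings(1_000_000, 0.25)  # congested cell
print(good_cell, loaded_cell)
```

A proxy at the RNC could apply such feedback promptly, but as the text notes, the segment between the RNC and the end user still contributes latency that this approach does not remove.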