Web page transmission, in which a user selects web page content and receives objects, is a core part of the Internet experience. Although a user typically makes a single selection and then views the web page presented on the screen, presenting that page can involve a large number of objects and multiple request/response round trips between the user's system and the system providing the web page.
Transmission Control Protocol (TCP) is one of the core protocols of the Internet protocol suite, complementing the Internet Protocol (IP) to make up the TCP/IP protocol suite. TCP is structured to provide reliable, ordered delivery of a stream of data from a program on one computer to a program on another computer. Hypertext Transfer Protocol (HTTP) may then use TCP to transfer objects in various web page transactions and other transfers of objects over network connections. TCP emphasizes reliability, as opposed to, for example, the User Datagram Protocol (UDP), which emphasizes reduced latency over reliability. In certain environments, TCP's emphasis on reliability may slow the experience of a web browser.
For example, TCP uses a “slow start” algorithm to control congestion inside a network. Slow start functions to avoid sending more data than the network is capable of transmitting. “Slow start” is also known as the exponential growth phase of a TCP communication. During this phase, slow start increases the TCP congestion window each time an acknowledgment is received, growing the window by the number of segments acknowledged. This continues until either an acknowledgment is not received for some segment or a predetermined threshold value is reached. If a loss event occurs, TCP assumes it is due to network congestion and takes steps to reduce the offered load on the network. Once the threshold has been reached, TCP enters the linear growth (congestion avoidance) phase.
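The window growth described above can be illustrated with a minimal simulation. This is a sketch, not an implementation of any particular TCP stack; the initial window, threshold, and round count are illustrative values, and the model simplifies per-acknowledgment growth to a per-round-trip doubling.

```python
def slow_start_growth(initial_cwnd=1, ssthresh=16, rounds=8):
    """Simulate congestion window (cwnd) size, in segments, per round trip.

    During slow start, each acknowledged segment grows cwnd by one
    segment, so a full window of acknowledgments per round trip doubles
    cwnd (the exponential growth phase). Once cwnd reaches the slow-start
    threshold (ssthresh), growth becomes linear: one additional segment
    per round trip (the congestion avoidance phase).
    """
    cwnd = initial_cwnd
    history = [cwnd]
    for _ in range(rounds):
        if cwnd < ssthresh:
            # Exponential growth phase, clamped at the threshold
            # for simplicity.
            cwnd = min(cwnd * 2, ssthresh)
        else:
            # Linear growth (congestion avoidance) phase.
            cwnd += 1
        history.append(cwnd)
    return history
```

Running the sketch shows why small-object transfers suffer: many round trips pass before the window is large enough to carry meaningful data per round trip.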
One method of improving the performance of web page transmission and presentation is HTTP prefetching. HTTP prefetching typically involves pre-requesting content on behalf of a client or browser before a request for that content is actually generated as a typical HTTP request and response in the course of a web page transaction. Certain prefetching embodiments pre-request content based on predictions about a future user selection, without any actual action or selection by the user. Other HTTP prefetching systems, such as the systems discussed here, pre-request content in response to a user action or selection as part of a web page transaction. In such systems, when content is prefetched it may become possible to satisfy the request for that content locally (with regard to the client or browser) or at a location with lower latency to the user, thereby negating the need to transmit the request and wait for the response from a content server. For example, where there is high latency between the client generating a request and the server responding with the requested content, each negated request/response avoids that latency penalty, potentially reducing the total time required to satisfy the entire series of requests for the client. This may result in an accelerated end user experience.
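The local-satisfaction behavior described above can be pictured with a minimal sketch of a prefetch cache. The class name, the object URLs, and the `fetch_from_server` fallback are hypothetical, introduced only for illustration; they do not describe any particular system.

```python
class PrefetchCache:
    """Sketch of satisfying requests locally from prefetched content.

    Objects pre-requested on the client's behalf are stored locally;
    a later request for a stored object is served without a round trip
    to the content server, while any other request falls through to
    the origin fetch.
    """

    def __init__(self, fetch_from_server):
        self._store = {}
        self._fetch = fetch_from_server  # fallback for cache misses

    def prefetch(self, url, content):
        # Content pre-requested ahead of the browser's own request.
        self._store[url] = content

    def get(self, url):
        # A prefetched object is returned locally, negating the
        # request/response round trip; otherwise go to the server.
        if url in self._store:
            return self._store[url], "local"
        return self._fetch(url), "origin"
```

For example, an object prefetched as `cache.prefetch("/style.css", ...)` is later returned with the `"local"` marker, while an object that was never prefetched incurs the origin fetch.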
The use of prefetching, along with other improvements to reduce the waterfall effects of chained requests in a web transaction, has created circumstances in which web browser applications need to download a large number of small objects simultaneously. In HTTP 1.x browsers, one of the limited ways to achieve this is to transfer the objects over a large number of TCP connections. However, this degrades performance because most of the transfer time is then spent in the TCP “slow start” phase; TCP starts slowly and is not designed to transfer small objects at high performance. Another approach is pipelining, but this causes problems with “head of line blocking”.
One potential solution to this problem is known as SPDY. SPDY involves the multiplexing and interleaving of data for any number of objects over a single TCP connection. SPDY introduces additional issues, however, such as serial transmission delay and “head of line blocking,” where a lost packet for the object at the head of the stream causes other objects to wait to be read even if they have been transferred successfully. There is therefore a need for further improvements for faster web browsing.
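The multiplexing and interleaving behavior described above can be sketched as round-robin framing of several object streams over one connection. The frame size, stream identifiers, and object contents below are illustrative, and the model is a simplification of SPDY's actual framing.

```python
def interleave_streams(objects, frame_size=4):
    """Sketch of SPDY-style multiplexing over a single connection.

    Each object is split into fixed-size frames, and frames from all
    streams are interleaved round-robin onto one wire. The returned
    list is the on-the-wire order as (stream_id, chunk) pairs. Because
    everything shares one TCP byte stream, a lost packet early in the
    wire order delays every frame behind it, regardless of stream --
    the "head of line blocking" issue noted above.
    """
    # Split each object into fixed-size frames, keyed by stream id.
    queues = {
        sid: [data[i:i + frame_size] for i in range(0, len(data), frame_size)]
        for sid, data in objects.items()
    }
    wire = []
    # Round-robin: take one frame from each non-empty stream per pass.
    while any(queues.values()):
        for sid, frames in queues.items():
            if frames:
                wire.append((sid, frames.pop(0)))
    return wire
```

For two objects on streams 1 and 2, the wire alternates between the streams until the shorter object is exhausted, so no single object monopolizes the connection the way serial transfer would.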