Broadband Internet subscribers have a certain expectation of the quality of their experience using the Internet. For example, with web pages, Internet users have an expectation that pages will load within a ‘reasonable’ amount of time. What is reasonable varies, but it is generally agreed that users expect a page to load in under 3 seconds. A recent Akamai™ survey found that 47 percent of consumers expect a load time of less than 2 seconds. In fact, 40 percent of respondents indicated that they would leave a site if it takes longer than 3 seconds to load (FORRESTER RESEARCH, INC., “eCommerce Web Site Performance Today—An Updated Look At Consumer Reaction To A Poor Online Shopping Experience” Aug. 17, 2009, Cambridge, Mass., USA).
Consumers also expect that increasing bandwidth will solve any Internet quality issues, but increased bandwidth alone may not improve page load time, or the load time for gaming or videos.
Consider that the load time for a web page is determined by a combination of:
a) Bandwidth speed and the size of a page;
b) Latency of the network (for example, between the client and DNS server, between the client and web server and the like);
c) Jitter of the network between the client and the server; and
d) ‘Think time’ of the server and the client, such as memory access, JavaScript execution, etc.
A typical website may require 10-20 unique transmission control protocol (TCP) connections to load all of its content, such as cookies, advertisements, HTML content, images, JavaScript libraries, etc. Web browsers have tried to address this situation through the parallelization of connections.
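As a rough illustration of why browsers parallelize connections (the numbers below are hypothetical, not taken from any measurement), the serial cost of connection establishment can be modeled by counting how many ‘waves’ of connection setups remain serial when a browser opens at most P connections at once:

```python
import math

def serial_connection_waves(total_connections, max_parallel):
    # With up to `max_parallel` TCP connections opened simultaneously,
    # their setup delays overlap; only ceil(N / P) rounds of connection
    # establishment are experienced serially by the page load.
    return math.ceil(total_connections / max_parallel)

# A page needing 18 connections, on a browser allowing 6 parallel
# connections, pays for 3 serial rounds of setup instead of 18.
waves = serial_connection_waves(18, 6)
```

Under this simplified model, parallelization reduces the number of serial TCP connections in the load time calculation below, but does not eliminate the per-round latency and jitter costs.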
All things considered, and making the simplifying assumption that TCP instantly ramps up to the maximum speed of the Internet connection, a typical webpage may load in something like the load time calculation of:

Load Time=(page size/bandwidth)+[number of DNS lookups*(client-to-DNS server latency+client-to-DNS server jitter)]+[number of serial TCP connections*(client-to-server latency+client-to-server jitter)]
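The load time calculation above can be sketched directly; the figures in the example call are illustrative assumptions (a 2 MB page on a 50 Mbit/s connection, 5 DNS lookups at 30 ms latency plus 10 ms jitter, and 8 serial TCP connections at 50 ms latency plus 20 ms jitter), not measured values:

```python
def load_time(page_size_bits, bandwidth_bps,
              dns_lookups, dns_latency_s, dns_jitter_s,
              serial_tcp_conns, server_latency_s, server_jitter_s):
    # Each term mirrors the simplified Load Time formula above.
    transfer = page_size_bits / bandwidth_bps
    dns = dns_lookups * (dns_latency_s + dns_jitter_s)
    tcp = serial_tcp_conns * (server_latency_s + server_jitter_s)
    return transfer + dns + tcp

# Transfer contributes 0.32 s, while DNS (0.20 s) and TCP setup
# (0.56 s) together contribute more than twice as much.
t = load_time(2 * 8e6, 50e6, 5, 0.030, 0.010, 8, 0.050, 0.020)
```

Even with these modest delay assumptions, the latency and jitter terms dominate the bandwidth term, consistent with the observation below that more bandwidth alone may not improve load time.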
Conventional TCP employs a congestion management algorithm called AIMD (additive increase, multiplicative decrease). One aspect of this algorithm is referred to as ‘slow start’ (Allman, M., Paxson, V., Stevens, W.; RFC 2581, TCP Congestion Control; Network Working Group, Standards Track; April 1999, http://www.ietf.org/rfc/rfc2581.txt, which is hereby incorporated by reference), which causes TCP to begin transmitting at a low rate and increase its speed over successive round trips until a packet is lost, at which point it slows down and hovers around that rate. If packets are lost due to congestion, then TCP may cut its rate in half each time. The implication for web page loading may be that the many small TCP connections required for each site may never reach full speed, allowing latency and jitter to dominate the Load Time calculation provided above. The page size and the bandwidth term may become an insignificant part of the calculation.
Therefore, there is a need for an improved system and method to minimize the effect of TCP slow start in order to optimize load time.