There is a general demand for efficient download of data to network clients in communication networks such as the Internet. For example, it is well known that the loading speed experienced by a network client is limited by the available connection bandwidth. However, bandwidth is not the only limiting factor.
FIG. 1 is a schematic diagram illustrating an example of the download time for the Wall Street Journal front page as a function of bandwidth for different latency values [1].
It is clear that available bandwidth is an important factor, but so is connection latency. Lower bandwidth means longer download times, and higher latency likewise means longer download times. Latency is thus a significant problem, and wireless networks often have inherently high latencies.
Slow web page download in wireless networks is indeed a Quality of Experience, QoE, problem, since end-users compare mobile access with the higher-performance fixed broadband networks.
From FIG. 1 it can also be seen that the utilization, which is shown as percentage values along the vertical dashed lines, is lower at high bit rates. The utilization has also turned out to decrease with higher latencies. In particular, wireless networks with higher round-trip times and delays generally cannot utilize the access efficiently.
One way to reduce download times is to push content to the client before it actually needs it, using so-called pre-loading. The object to be downloaded is assumed to be known in advance and can therefore be pushed to the browser cache. This means that the access can be fully utilized, but it requires in-advance knowledge. One of the main goals is thus to create "a priori" knowledge about what the client will request.
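The pre-loading idea above can be illustrated with a minimal sketch, assuming a simple client-side cache model; the class and function names are illustrative only and the "a priori" knowledge of expected requests is assumed to come from elsewhere:

```python
# Hypothetical sketch of pre-loading: objects the client is expected to
# request are pushed into its cache ahead of time, so later requests are
# served locally instead of over the high-latency link.

class BrowserCache:
    def __init__(self):
        self.store = {}

    def push(self, url, content):
        """Server-initiated insertion into the cache (pre-loading)."""
        self.store[url] = content

    def fetch(self, url, network_fetch):
        """Return cached content; fall back to the network on a miss."""
        if url in self.store:
            return self.store[url], "cache"
        content = network_fetch(url)
        self.store[url] = content
        return content, "network"

# Usage: push a predicted object before the client asks for it.
cache = BrowserCache()
cache.push("/style.css", b"body{}")  # pre-loaded in advance
content, source = cache.fetch("/style.css", lambda u: b"")
print(source)  # "cache": the request costs no network round trip
```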
In reference [1] it is suggested that, when script loading halts the main parser, a side parser can go through the rest of the page code to find more resources to load; the resources can also be prioritized so that scripts and style sheets load before images. With these optimizations the impact of network latency can be reduced and the download times improved, as schematically illustrated in FIG. 2.
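Such a side parser can be sketched as follows; this is an illustrative simplification, not the actual mechanism of reference [1], and the priority values are assumptions:

```python
# Hypothetical sketch of a "side parser" that scans page markup for
# loadable resources without executing scripts, and prioritizes scripts
# and style sheets before images.
from html.parser import HTMLParser

class SideParser(HTMLParser):
    """Collects resource URLs from the remaining page code."""
    def __init__(self):
        super().__init__()
        self.resources = []  # (priority, url); lower number loads earlier

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "script" and "src" in attrs:
            self.resources.append((0, attrs["src"]))        # high priority
        elif tag == "link" and attrs.get("rel") == "stylesheet":
            self.resources.append((0, attrs.get("href")))   # high priority
        elif tag == "img" and "src" in attrs:
            self.resources.append((1, attrs["src"]))        # after scripts/CSS

def prioritized_resources(html):
    parser = SideParser()
    parser.feed(html)
    return [url for _, url in sorted(parser.resources, key=lambda r: r[0])]

page = """<html><head>
<img src="banner.png">
<link rel="stylesheet" href="style.css">
<script src="app.js"></script>
</head></html>"""
print(prioritized_resources(page))  # ['style.css', 'app.js', 'banner.png']
```

The stable sort keeps document order within each priority class, so scripts and style sheets are issued before images regardless of where they appear in the markup.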
However, a remaining problem is to know the dynamic content in advance. For a specific web site this may be handled by the web site itself, but this requires an optimized design of the web site, which is not that common.
Transparent Internet caching will improve the performance for statically stored objects, but may only cache a subset of the content. In the end, only a smaller part of the potential efficiency gain is realized.
Many objects in web pages depend on states in the clients or web sites. Therefore, methods that cannot manage the dynamic nature of web pages in some way will ultimately not give fast download times. Traditional caching techniques can usually manage the static part of the content, but the dynamic objects are usually not cached; the classical caching solution thus has difficulties accelerating the download of dynamic content of web pages.
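The limitation of classical caching can be illustrated with a minimal sketch; the cacheability heuristics below (a `no-store` directive or a query string as a crude proxy for state-dependent content) are assumptions for illustration:

```python
# Illustrative sketch of why a transparent cache helps static objects
# but not dynamic ones: state-dependent responses are not cached, so
# every such request pays the full network round trip.

def is_cacheable(url, headers):
    if "no-store" in headers.get("Cache-Control", ""):
        return False
    if "?" in url:  # crude stand-in for client/site state dependence
        return False
    return True

class TransparentCache:
    def __init__(self):
        self.store = {}
        self.misses = 0  # each miss costs a round trip to the origin

    def get(self, url, headers, origin_fetch):
        if url in self.store:
            return self.store[url]
        self.misses += 1
        body = origin_fetch(url)
        if is_cacheable(url, headers):
            self.store[url] = body
        return body

cache = TransparentCache()
fetch = lambda u: b"payload"
for _ in range(3):
    cache.get("/logo.png", {}, fetch)  # static: only the first is a miss
    cache.get("/feed?user=42", {"Cache-Control": "no-store"}, fetch)
print(cache.misses)  # 4: one for the static object, three for the dynamic one
```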
Reference [2] proposes dynamic pre-loading of data to a proxy using multiple instances of the client, so-called shadow browsers, running in an intermediate network node between the original client and the remote web server. However, although this is a quite satisfactory solution able to handle dynamic content, it is processing-intensive, since it requires multiple instances of the web browser for each client.
There is thus a demand for solutions that reduce download times in an efficient manner, especially for wireless networks. It is also desirable to be able to handle both static content and at least some of the dynamic content.