A server-based application that displays a significant amount of data is typically delayed while the web server retrieves the data from a distributed set of resources and builds the page contents before transmitting the web page to a web browser.
Some conventional web servers or web applications perform all discovery upon receiving a request from a web browser, wait for all discovery to complete, construct a web page, and only then transmit the web page to the browser. This process can take a considerable amount of time, during which the user sits idle, awaiting a response on the display and unable to work until the new page appears.
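The blocking behavior described above can be sketched as follows. This is an illustrative Python sketch, not any particular server's implementation; `fetch_resource` and `handle_request` are hypothetical names standing in for queries against distributed resources and the page-building step.

```python
import time

def fetch_resource(resource_id):
    """Stand-in for a query to one distributed resource (hypothetical)."""
    time.sleep(0.001)  # simulate per-resource retrieval latency
    return f"data-{resource_id}"

def handle_request(resource_ids):
    """Blocking model: gather every piece of data first, then build the
    complete page before anything is transmitted to the browser."""
    pieces = [fetch_resource(r) for r in resource_ids]  # all discovery completes here
    page = "<html><body>" + "".join(
        f"<p>{p}</p>" for p in pieces) + "</body></html>"
    return page  # the browser receives nothing until this returns

page = handle_request(range(50))
```

Because the page is returned only after every resource has responded, the total delay is the sum of all retrieval times, which is what leaves the user idle.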
Other conventional web servers or web applications do not attempt to transfer pages containing so much information that accumulating the requested data would take a prohibitively long time; instead, they divide assembly of a web page into chunks. For example, a web server with 1000 pieces of information to gather may split assembly into ten chunks of 100 pieces, each chunk accumulated as a separate web page. A requesting web browser must then access ten different web pages to view all of the data, a technique that does not allow the user to view all of the data at one time.
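The chunking approach in the example above (1000 pieces split into ten pages of 100) can be sketched as a simple slicing routine; the function name and item labels here are illustrative only.

```python
def chunk_pages(items, chunk_size):
    """Split the full result set into fixed-size chunks, each of which
    would be accumulated and served as a separate web page."""
    return [items[i:i + chunk_size] for i in range(0, len(items), chunk_size)]

# 1000 pieces of information, split into ten separate pages of 100 each
pieces = [f"item-{n}" for n in range(1000)]
pages = chunk_pages(pieces, 100)
```

Each entry in `pages` corresponds to one of the ten pages the browser must request in turn, which is why the user never sees all 1000 pieces on a single page.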
Another conventional technique uses a cache mechanism, adding overhead to ensure cache coherency and data freshness.
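The overhead of keeping cached data fresh can be illustrated with a minimal time-to-live cache; this is a generic sketch, not the mechanism of any particular product, and the class and parameter names are assumptions.

```python
import time

class TTLCache:
    """Minimal time-to-live cache: every lookup carries the extra work
    of checking whether the stored entry is still fresh."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, timestamp)

    def get(self, key, loader):
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now - entry[1] < self.ttl:
            return entry[0]            # fresh hit: serve cached value
        value = loader(key)            # stale or missing: reload from source
        self._store[key] = (value, now)
        return value

calls = []
def load(key):
    """Stand-in for a slow retrieval from the backing resource."""
    calls.append(key)
    return key.upper()

cache = TTLCache(ttl_seconds=60)
a = cache.get("x", load)  # miss: loader runs
b = cache.get("x", load)  # fresh hit: loader is not called again
```

Even this simple scheme must timestamp entries and re-check them on every access; coherency across multiple servers would add further bookkeeping on top of this.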