The present invention relates generally to the field of navigating a portal site and providing web content, and in particular to a method for client-side aggregation of web content. The present invention relates further to an Extensible Stylesheet Language (XSL) transformer module for client-side aggregation of web content, a computing system, a data processing program, and/or a computer program product.
Today, it is nearly impossible to imagine information retrieval and access without Internet technologies. In the early days of the World Wide Web (WWW), a user requested a hypertext document from a server, and the server returned the same static document for every identical request. Every user received the same content for the same entered URL (uniform resource locator).
With the emergence of dynamic client and server technologies, a server was able to generate dynamic content. Servers could provide complex and dynamic content for business applications and user interfaces, depending on a variety of factors, e.g., client request parameters, user preferences, or backend state. Today, portal server products use dynamic technologies and aggregate dynamic and static independent content from different sources or applications into an integrated content view. These content components are called portlets in a portal environment.
There are two types of content aggregation: server-side content aggregation and client-side content aggregation.
In general, referring to FIG. 2, a browser/client BC sends a request R to the server S, and the server responds with the page content PC of the web page. The client afterwards loads linked resources such as images, style sheets, or JavaScript files. Intermediate caches, such as the browser cache C1 or a network node cache C2, can save the response from the server and return it as the response for subsequent identical requests, depending on the response header information.
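The header-driven caching behavior described above can be sketched as follows. This is an illustrative sketch only: a minimal response cache keyed by request URL that honors the Cache-Control max-age response header. The names (ResponseCache, parseMaxAge) are hypothetical and not part of the original disclosure.

```javascript
// Extract the max-age directive (in seconds) from a Cache-Control header.
function parseMaxAge(cacheControl) {
  const m = /max-age=(\d+)/.exec(cacheControl || "");
  return m ? Number(m[1]) : 0; // seconds the response may be reused
}

class ResponseCache {
  constructor() {
    this.entries = new Map(); // URL -> { body, expiresMs }
  }
  // Save a response, as cache C1 or C2 would, using the header information.
  store(url, body, cacheControl, nowMs) {
    const ttlMs = parseMaxAge(cacheControl) * 1000;
    this.entries.set(url, { body, expiresMs: nowMs + ttlMs });
  }
  // Return the saved response for the next identical request while fresh.
  lookup(url, nowMs) {
    const e = this.entries.get(url);
    return e && nowMs < e.expiresMs ? e.body : null;
  }
}
```

A cache miss, or an entry whose max-age has elapsed, yields null and would require a new request to the server S.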
Referring to FIG. 3, in the server-side content aggregation case, the server S takes the request R from the client BC, processes it, aggregates the content of the different portlets on the server side into the page content PC, and responds with the logically complete content to the client. The full page content is returned from the server each time; the browser only renders it for the user. Search engines are easily able to search or index the page content. Every click on a new page content link triggers a full page change. The involved caches can only save and return the full content for an identically requested URL, depending on the response header information. When the page content is aggregated on the server, the page's cacheability is that of the least cacheable component, typically leading to limited cacheability.
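The server-side case can be sketched as follows. This is a hedged sketch under stated assumptions: the placeholder marker and the portlet rendering interface are illustrative, not taken from the original disclosure.

```javascript
// Server-side aggregation: the server renders every portlet and joins the
// fragments into one logically complete page before responding.
function aggregatePage(skeleton, portlets, request) {
  const fragments = portlets
    .map(p => `<div class="portlet">${p.render(request)}</div>`)
    .join("");
  // The full page content PC is assembled here, which is why intermediate
  // caches can only store the complete page for the exact requested URL.
  return skeleton.replace("<!--PORTLETS-->", fragments);
}
```

Because the least cacheable portlet is baked into the same response as every other portlet, the whole page inherits its cacheability.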
In the client-side content aggregation case, as opposed to the server-side aggregation, the server does not provide logically complete content. It only sends the page content skeleton and bootstrap code to initialize the aggregation, but does not immediately provide the content of the portlets. The client renders the content skeleton and aggregates the content client-side in the view layer by loading and injecting the content of every required portlet with the help of prior-art client-side technologies based on JavaScript (JS). A click that changes the content of the current view is mostly handled by manipulating the existing content with the help of JS. JS also has to manage the client-side state of the current view. The involved caches can save and return the particular required content fragments individually.
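The bootstrap behavior described above can be sketched as follows. This is a hedged sketch: `loadFragment` stands in for a fetch of one portlet fragment and `inject` for DOM insertion into a placeholder; both names, and the placeholder-id scheme, are assumptions for illustration only.

```javascript
// Client-side aggregation: the server delivered only a skeleton with empty
// placeholders; the client loads each required portlet fragment and injects
// it into the view.
async function aggregateOnClient(placeholderIds, loadFragment, inject) {
  // Load all required fragments in parallel, so each one can be cached
  // individually by the intermediate caches.
  const fragments = await Promise.all(
    placeholderIds.map(id => loadFragment(id).then(html => ({ id, html })))
  );
  // Inject every fragment into its placeholder in the rendered skeleton.
  for (const { id, html } of fragments) inject(id, html);
}
```

In a real browser, `inject` would resolve the placeholder via the DOM (e.g., by element id) and the injection happens after the initial rendering pass, which is the source of the flickering discussed below.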
In one known example of client-side aggregation of web content, information displayed by a client application is aggregated on the client side of a logically separated client-side/server-side computing environment. Information may be retained on the client side and displayed by a client application, with the required information provided by a portal application server on the server side of the client-side/server-side computing environment.
Client-side content aggregation has the following advantages over server-side content aggregation. It saves bandwidth on the server and network nodes due to individual caching of content fragments. Server-side CPU cycles are offloaded to the client side. It may also load the page content faster, improve the user experience, and reduce infrastructure cost.
But traditional client-side content aggregation also has some drawbacks. It often needs a large amount of complex JavaScript code to load and manipulate content, manage the client-side state, and interpret common browser events, all of which results in error-prone user interface code. There may also often be flickering or undesired motion in the page during rendering, because the portlet content is injected after the rendering process by the browser. This is an inevitable consequence of using JavaScript for the client-side approach. Additionally, due to browser incompatibilities in the usage of JavaScript, it is necessary to test the client-side code on every supported client platform. Moreover, there is only limited "crawlability" for search engines and limited "bookmarkability" of content in browsers, because the JavaScript code has to be executed before the content is complete. Today's web crawlers, however, do not execute JavaScript. Additional drawbacks include the following: heavy usage of JavaScript and infrequent full-page switches are often the cause of client-side memory leaks in the browser. Since the step of aggregating the page out of a skeleton and portlet fragments is executed in the rendering step of the browser, the script used to aggregate the page can easily conflict with other scripts on the page. These kinds of interoperability problems are hard or impossible to debug and resolve.