Over the past decade, the advent of Asynchronous JavaScript® and XML (AJAX) technologies and the enormous increase in browser performance have enabled Internet browsers to run fully featured user applications. The majority of these applications make extensive use of the JavaScript® programming language (Java and all Java-based trademarks and logos are trademarks or registered trademarks of Oracle and/or its affiliates). As JavaScript®-based web applications have continued to grow in functionality, developers have begun to apply practices from traditional software development to cope with the increase in complexity. One of the most common examples is splitting application classes out into separate JavaScript® files. These files are then stored in common directories, indicating their relationship to each other. This practice gives the application the concept of separate “modules,” enabling easier reuse of modules across applications and allowing each part of the application to load only the modules it requires.
To control the management and loading of required JavaScript® modules, the majority of JavaScript® toolkits provide helper utilities. These utilities take the path of a module and take over responsibility for loading and evaluating that module. An application uses these utilities to ensure that all the modules it relies on are loaded before proceeding. When a loader needs to retrieve a new JavaScript® module, it must open a new HTTP connection back to the host the application was served from and use that connection to retrieve the associated JavaScript® file. If an application declares forty modules as requirements, the browser will therefore have to process forty new HTTP connections.
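The loader behaviour described above can be sketched as follows. This is a minimal, illustrative model, not any particular toolkit's API: the cache, the `requireModule()` name, and the stand-in `fetchModuleSource()` function are all assumptions; a real loader would issue the request via `XMLHttpRequest` or a `<script>` element against the serving host.

```javascript
// Sketch of a module loader's bookkeeping: one HTTP request per
// uncached module (names here are illustrative, not a real API).
const moduleCache = {};      // module path -> evaluated module
let httpRequestCount = 0;    // each cache miss costs one request

// Stand-in for the network fetch of a module's source file.
function fetchModuleSource(path) {
  httpRequestCount += 1;
  return `({ name: ${JSON.stringify(path)} })`; // placeholder source text
}

function requireModule(path) {
  if (!(path in moduleCache)) {
    const source = fetchModuleSource(path);
    moduleCache[path] = eval(source); // evaluate and register the module
  }
  return moduleCache[path];
}

// Forty required modules -> forty separate HTTP requests on first load.
for (let i = 0; i < 40; i++) {
  requireModule(`app/modules/mod${i}.js`);
}
console.log(httpRequestCount); // 40
```

The cache ensures a module is fetched at most once, but the first load of the application still pays one request per module.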
Many Internet browsers have a built-in limit on the number of connections a page may hold open to a single host. This limit depends on the browser being used and varies between two (Internet Explorer® 7) and fifteen (Firefox® 3.6+). JavaScript® module loaders traditionally expect the modules to reside on the same host the page was served from.
The combination of browser connection limits and JavaScript® modules increases application loading times because the browser allows only a fixed number of parallel resource requests. Once the connection threshold is reached, all subsequent module loading is suspended until an existing request completes. Because the application depends on all of its modules, it cannot proceed until every one has been loaded. This introduces an artificial delay into the loading time of an application: the more complicated the application, the more modules it requires and the longer the possible delay. Improving page load time is a critical factor for web applications, as slow applications are less likely to retain users.
A number of approaches already exist for reducing the loading time introduced by the module pattern. The most popular involves building a “production” version of the application code: the entire JavaScript® source is run through a compiler offline, which compresses and combines all the code into a single static file. In the live web application, this single file is the only one the browser needs to request, instead of each module individually. This approach dramatically reduces the loading time caused by requiring multiple modules, but at the cost of introducing a manual compilation step that must be repeated every time the source code changes.
To build the production source file, the compiler needs a list of all possible modules upfront. The compiled code includes every module, and all of them are loaded regardless of whether they are actually used in a given session. Different users exercise different code paths of a web application, so it is likely that parts of the code will never be required in a particular scenario, yet every user pays the initial cost of loading everything upfront.
A side effect of compiling an application's JavaScript® code into a single source file is its impact on client-side caching. Browsers use client-side caching to avoid re-downloading page elements that have not changed since the last page load. When the entire application is delivered in a single file, a change to any one module forces the entire codebase to be refreshed, rather than just that module's code. Overall, this approach needs fewer connections and produces less delay, but may cause more data to be downloaded.
Another approach, currently available in the Dojo® Toolkit, does not produce a single source file, but instead uses the compilation step to produce a lazy-loading version of the toolkit. The compiler parses and generates the base version of Dojo®, which registers a stub class for each of the base modules instead of actually loading that module's source. When the application tries to use a module function, the stub ensures that the actual module source is loaded and registered, so nothing is loaded until it is actually needed. While this solution removes the loading delay associated with JavaScript® modules, it still requires a manual compilation step to generate a static version of the source code. Any change to the source code requires the entire compilation process to be run again, which causes cached versions of the files to be reloaded because the files have been regenerated even when their contents have not changed.
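The stub pattern can be illustrated with a short sketch. The registry, `registerStub()`, and `loadSource()` names are hypothetical, not Dojo®'s actual API; the point is that registering a stub costs nothing, and the real source is only pulled in on first use.

```javascript
// Sketch of lazy loading via stubs: each module is represented by a
// lightweight stand-in until one of its functions is first called.
const registry = {};
let sourcesLoaded = 0; // counts real module sources fetched

// Stand-in for fetching and evaluating a module's real source.
function loadSource(name) {
  sourcesLoaded += 1;
  return { greet: () => `hello from ${name}` };
}

function registerStub(name) {
  let real = null; // the real module, loaded on demand
  registry[name] = {
    greet(...args) {
      if (real === null) real = loadSource(name); // first use triggers load
      return real.greet(...args);
    },
  };
}

// Registering stubs costs no module downloads...
['app/ui', 'app/net'].forEach(registerStub);
console.log(sourcesLoaded); // 0

// ...a module's source is fetched only when it is first used.
registry['app/ui'].greet();
console.log(sourcesLoaded); // 1
```

Subsequent calls go straight to the cached real module, so each source is loaded at most once.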
A final approach is to distribute the modules across a number of servers with different hostnames, so the browser can open multiple concurrent connections to these different hosts. However, this requires access to multiple independent servers and needs the module loader to contain special code for loading from multiple hosts rather than only the host the application is served from. In addition, the issue of loading all the code upfront, regardless of whether it is used, remains unresolved.
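The "special code" such a loader would need can be as simple as mapping each module onto one of several hosts. The host list and function name below are hypothetical; a deterministic assignment is used so that a given module always resolves to the same URL and the browser cache is not defeated.

```javascript
// Sketch of spreading module requests across several hostnames.
// The hosts are illustrative; they must all serve identical files.
const hosts = ['static1.example.com', 'static2.example.com', 'static3.example.com'];

function moduleUrl(path, index) {
  // Round-robin by module index: stable per module, spread across hosts,
  // letting the browser open concurrent connections to each host.
  return `http://${hosts[index % hosts.length]}/${path}`;
}

console.log(moduleUrl('app/modules/mod0.js', 0));
console.log(moduleUrl('app/modules/mod3.js', 3)); // wraps back to host 1
```

This only widens the connection bottleneck; it does nothing about loading unused code upfront.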