As applications presented via the World Wide Web have grown increasingly complex in recent years, the problem of managing, maintaining, and testing such applications has grown correspondingly difficult. For example, many web sites presently offer such rich and varied functionality that it is difficult to adequately test an application or applications offered on the web site before making the application available to users. Similarly, the richness and depth of functionality offered by many such applications makes it difficult to train users to use them. Those managing and developing web applications face the challenge of ensuring not only that such applications function according to their design, but also that the applications have been designed to be usable. Further, web application developers and managers are often faced with the challenge of determining how their web application compares to those offered by peers or competitors.
Products for managing, testing, and providing training for web applications are presently available. Various companies, for example, Mercury Interactive Corporation of Mountain View, Calif., provide products that automate the process of testing software applications, including web applications. These products work to ensure that new site changes do not break existing functionality. However, there is no guarantee that such products will comprehensively test or evaluate a web application. For one thing, the information provided by such products is highly dependent on the human beings who configure them. Moreover, it is rarely possible for a testing or evaluation product to be kept up to date with respect to the myriad changes that frequently—often, daily—take place on many complex web sites. Accordingly, testing configurations presently are often inflexible and require a great deal of effort to implement and maintain. In sum, present testing and evaluation products often fail to detect significant quality issues with the web applications being tested and/or evaluated.
Failures to detect quality issues concerning major web sites occur despite the fact that most major web sites are regularly tested and/or evaluated by multiple groups of people, including people responsible for software quality assurance (QA), software and web site developers, marketing personnel, and/or user interface (UI) evaluators. Many web sites also receive feedback from users who encounter errors or usability issues. In addition, automated programs of the type discussed above may be in frequent, even daily, use. Moreover, many web site administrators review entries in web logs to identify and interpret error codes, although such entries are typically cryptic, and often meaningless without a visual representation of the error. In general, neither human testers and evaluators nor automated programs presently make systematic and structured use of information generated by a web application itself that may suggest areas of functionality that should be tested and/or evaluated. Thus, both humans and automated programs lack a systematic, structured way of identifying all or even most of the quality issues, including certain significant quality and/or usability issues, that are likely to present themselves to users.
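The web-log review mentioned above typically amounts to scanning server log entries for HTTP error status codes. As a minimal illustrative sketch (not a description of any particular product), the following Python snippet aggregates 4xx/5xx responses per requested path from Common Log Format lines; the field layout and sample entries are assumptions for illustration:

```python
import re
from collections import Counter

# Assumed Common Log Format: extract the request line's path and the status code.
LOG_LINE = re.compile(r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3})')

def error_summary(log_lines):
    """Count 4xx/5xx responses, keyed by (path, status code)."""
    counts = Counter()
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("status")[0] in "45":
            counts[(m.group("path"), m.group("status"))] += 1
    return counts

# Hypothetical sample log entries.
sample = [
    '127.0.0.1 - - [10/Oct/2023:13:55:36 +0000] "GET /checkout HTTP/1.1" 500 1024',
    '127.0.0.1 - - [10/Oct/2023:13:55:40 +0000] "GET /index.html HTTP/1.1" 200 512',
    '127.0.0.1 - - [10/Oct/2023:13:56:01 +0000] "GET /checkout HTTP/1.1" 500 1024',
]
print(error_summary(sample))
```

Even such an aggregate view only surfaces the error codes themselves; as the passage notes, a code alone is often meaningless without a visual representation of the failure the user actually saw.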
Accordingly, there is a need for a web application testing and evaluation tool that uses information recorded and/or generated by the web site itself. Such a tool would advantageously provide information regarding usability and/or quality issues, allowing for more efficient maintenance and upgrading of the web application. Such a tool would advantageously pinpoint issues requiring user training, in addition to facilitating the methodical and structured training of users with respect to features and functions where such training is most needed. Moreover, such a tool would advantageously allow issues that are difficult or impossible to detect by an automated process (e.g., finding listings in a directory that are practically duplicates but not character-for-character identical) to nonetheless be systematically analyzed.
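The near-duplicate example above can be approximated with fuzzy string matching rather than exact comparison. The following Python sketch uses the standard-library `difflib.SequenceMatcher` to flag listing pairs above a similarity threshold; the listing strings and the 0.9 threshold are assumptions for illustration, not part of any disclosed method:

```python
from difflib import SequenceMatcher

def near_duplicates(listings, threshold=0.9):
    """Return pairs of listings whose similarity ratio meets the threshold."""
    # Normalize case and whitespace so trivial differences do not mask matches.
    normalized = [" ".join(s.lower().split()) for s in listings]
    pairs = []
    for i in range(len(normalized)):
        for j in range(i + 1, len(normalized)):
            ratio = SequenceMatcher(None, normalized[i], normalized[j]).ratio()
            if ratio >= threshold:
                pairs.append((listings[i], listings[j]))
    return pairs

# Hypothetical directory listings: the first two are practical duplicates.
listings = [
    "Acme Plumbing - 123 Main St.",
    "ACME Plumbing, 123 Main St",
    "Best Bakery - 456 Oak Ave.",
]
print(near_duplicates(listings))
```

Character-for-character comparison would treat the first two listings as distinct; a similarity ratio tolerates the differing punctuation and capitalization, which is the kind of issue the passage describes as hard for a purely exact automated check to detect.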