Over the past several years, computer server functionality has become increasingly modular. The rise of transaction processing, web services, eXtensible Markup Language (“XML”) interfaces, and similar technologies has resulted in a situation in which a lengthy execution chain involves one or more application servers, middleware servers, database servers, and/or the like, merely to service a single user request (e.g., a hypertext transfer protocol (“HTTP”) GET request from a web browser, or the like).
This modularity, however, has greatly complicated the task of software instrumentation and performance monitoring. For example, in an execution chain involving several different application components, it can be difficult to monitor the performance (and/or even confirm the execution) of a particular application component to determine, for example, which component is introducing performance bottlenecks and/or preventing the execution chain from completing successfully.
Several solutions have been proposed to the problem of instrumenting such modular application chains. For example, the '228 application (already incorporated by reference) describes several existing solutions, and the issues inherent to such solutions. The '228 application also discloses a framework and techniques that provide the ability to track the performance and reliability of different components in an application chain. To date, however, such application instrumentation solutions have been unable to instrument processes that occur inside of a database. For example, the '228 application describes a technique of recording when a database is called, and recording when a return is received from the database, as a proxy for measuring the performance of the database itself.
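The call/return timing proxy described above can be illustrated with a minimal sketch. This is not the '228 application's actual implementation; it is a hypothetical Python example (the function name `timed_db_call` and the use of a DB-API cursor are assumptions for illustration) showing why such a measurement treats the database as a black box:

```python
import time

def timed_db_call(cursor, sql, params=()):
    """Record when the database is called and when it returns,
    as a proxy for the database's own performance."""
    start = time.perf_counter()          # timestamp at call
    cursor.execute(sql, params)
    rows = cursor.fetchall()
    elapsed = time.perf_counter() - start  # timestamp at return
    # 'elapsed' covers the entire round trip: network latency,
    # queuing, and every nested call inside the database are
    # lumped together and cannot be distinguished.
    return rows, elapsed
```

As the comments note, the proxy yields only a single end-to-end duration; it reveals nothing about which step inside the database consumed the time.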
More generally, existing database performance metrics gathering and tracing solutions can only track the number of times a top-level step/function/code module is called within the database and/or the overall duration of the call chain in a given session or job. They cannot provide any granularity below this level to show individual performance metrics for each nested call within the same database; likewise, existing solutions cannot provide instrumentation for nested calls across links/gateways to other remote databases (whether on the same platform or not). Existing solutions also lack the ability to reliably map database sessions back to application/browser sessions except through relative server timestamps, which are highly unreliable, especially in multi-threaded and multi-server environments.
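The unreliability of timestamp-based session mapping can be demonstrated with a hypothetical sketch. The function `correlate_by_timestamp` below is an illustrative assumption, not an existing tool's API; it pairs each database session with the closest preceding application event by server timestamp, which is roughly what existing solutions are limited to:

```python
def correlate_by_timestamp(app_events, db_sessions, tolerance=0.5):
    """Naively map each DB session to the application event whose
    server timestamp most closely precedes it.

    app_events  -- list of (request_id, timestamp) tuples
    db_sessions -- list of (session_id, timestamp) tuples
    """
    matches = {}
    for sid, ts in db_sessions:
        candidates = [(ts - t, rid)
                      for rid, t in app_events
                      if t <= ts and ts - t <= tolerance]
        # Pick the nearest preceding event -- with concurrent
        # requests this is a guess, not a guarantee.
        matches[sid] = min(candidates)[1] if candidates else None
    return matches
```

With two concurrent requests arriving milliseconds apart (e.g., `("r1", 0.00)` and `("r2", 0.01)`), a database session opened at `0.02` is always attributed to the nearest request, regardless of which request actually spawned it. In a multi-threaded, multi-server environment with unsynchronized clocks, such misattribution is routine.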
There remains a need, however, for a solution that can instrument database performance at a finer granularity, including nested calls within a database and calls across links/gateways to remote databases.