FIG. 1 illustrates a shared nothing network 100 used in accordance with the prior art. The shared nothing network or architecture 100 includes a master node 102 and a set of shared nothing nodes 104_A through 104_N. Each shared nothing node 104 has its own private memory, disks and input/output devices that operate independently of any other node in the architecture 100. Each node is self-sufficient, sharing nothing across the network. Therefore, there are no points of contention across the system and no sharing of system resources. The advantage of this architecture is that it is highly scalable.
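The shared nothing layout described above can be sketched as follows. This is an illustrative model only, not an implementation from the prior art: the class names, the hash-based routing of keys to nodes, and the key/value interface are all assumptions introduced for clarity. It shows why the architecture scales, since each request touches exactly one node's private storage.

```python
# Illustrative sketch of a shared nothing network: a master node routes each
# request to exactly one worker node, and workers share no memory or disk.
# Class names and hash routing are assumptions, not taken from the text.

class WorkerNode:
    """A shared nothing node with its own private storage."""
    def __init__(self, node_id):
        self.node_id = node_id
        self.storage = {}  # private to this node; no other node accesses it

    def put(self, key, value):
        self.storage[key] = value

    def get(self, key):
        return self.storage.get(key)


class MasterNode:
    """Routes each key to exactly one worker, so nodes never contend."""
    def __init__(self, num_workers):
        self.workers = [WorkerNode(i) for i in range(num_workers)]

    def _route(self, key):
        # Each key maps deterministically to a single node.
        return self.workers[hash(key) % len(self.workers)]

    def put(self, key, value):
        self._route(key).put(key, value)

    def get(self, key):
        return self._route(key).get(key)


cluster = MasterNode(num_workers=4)
cluster.put("order:42", {"total": 99.95})
print(cluster.get("order:42"))
```

Because no resource is shared, capacity grows by simply adding worker nodes, which is the scalability advantage the paragraph above notes.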
Enterprise database systems have been implemented on shared nothing networks. Such enterprise database systems are used to support Business Intelligence (BI) operations. With an ever-increasing breadth of data sources integrated into data warehousing scenarios and advances in analytical processing, the classic categorizations of query workloads, such as Online Transaction Processing (OLTP), Online Analytical Processing (OLAP), loading, reporting, or massively concurrent queries, have long been blurred. Mixed workloads have become a reality that today's database management systems must facilitate and support concurrently.
Processing of mixed workloads poses a series of interesting problems because different components of workloads compete for resources and, depending on their resource profiles, often impact each other negatively. This calls for mechanisms that allow users to assign priorities to different workloads, which the system then enforces by allotting resources accordingly.
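One simple way to enforce such priorities is proportional allotment, where each workload receives a share of a resource pool in proportion to its assigned priority weight. The sketch below is a hypothetical illustration of that idea, not the mechanism of any particular system; the workload names, weights, and slot counts are assumptions.

```python
# Illustrative sketch: allot shares of a resource pool (e.g., execution
# slots) to workloads in proportion to administrator-assigned priorities.
# Workload names and weights below are assumptions for illustration.

def allot_resources(priorities, total_slots):
    """Split total_slots among workloads proportionally to priority weight."""
    total_weight = sum(priorities.values())
    allotment = {
        name: (weight * total_slots) // total_weight
        for name, weight in priorities.items()
    }
    # Give any slots lost to integer division to the highest-priority workload.
    leftover = total_slots - sum(allotment.values())
    top = max(priorities, key=priorities.get)
    allotment[top] += leftover
    return allotment


priorities = {"load": 4, "tactical": 3, "strategic": 2, "ad_hoc": 1}
print(allot_resources(priorities, total_slots=100))
# → {'load': 40, 'tactical': 30, 'strategic': 20, 'ad_hoc': 10}
```

Under this scheme, raising a workload's weight directly increases its share of resources, so competing workloads impact each other in a controlled, priority-ordered way rather than arbitrarily.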
The following list illustrates some of the most prominent scenarios of competing workloads with different priorities:
Loading vs. reporting. The quality of analytical processing relies, among other things, on the freshness of data as provided by periodic loads. Loads are typically performed in an on-line fashion, i.e., the database system is used for reporting while loads are active. The timely completion of loads is essential for all further analyses and processing. A variant of this scenario is nightly loads. Periodic loads are usually assigned higher priority than reporting workloads.
Tactical vs. strategic analysis. Concurrently run reports may differ in their general importance to the business in terms of the timeliness with which the results are needed for business decisions. Tactical analysis reports typically have near-term impact on business and are often assigned higher priority than strategic analysis reports.
Operational workloads. This refers to operational emergencies in which administrators must act quickly for damage control, e.g., to rectify data contamination resulting from faulty load procedures. These workloads should have precedence over other ongoing activity.
Operational safety. By assigning ad-hoc users' workloads appropriately low priorities, administrators can limit the impact of experimental and accidentally complex queries without having to monitor all activity on the system continuously or even deny users access preventatively.
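The scenarios above imply a relative ordering among workload classes: operational emergencies first, then loads, then tactical analysis, then strategic analysis, with ad-hoc queries last. A minimal sketch of that ordering as a priority queue follows; the numeric priority levels and query labels are assumptions chosen to match the list above, not values from any actual system.

```python
# Illustrative sketch: dispatch queries in workload-priority order using a
# heap. Numeric levels are assumptions reflecting the scenarios listed above
# (smaller number = dispatched first).
import heapq

PRIORITY = {
    "operational": 0,  # emergencies take precedence over everything
    "load": 1,         # periodic loads outrank reporting
    "tactical": 2,     # near-term business impact
    "strategic": 3,    # longer-term analysis
    "ad_hoc": 4,       # experimental user queries, lowest priority
}


def dispatch_order(queries):
    """Order (workload_kind, query) pairs by priority, preserving arrival
    order within each priority level via the sequence number."""
    heap = [(PRIORITY[kind], seq, q) for seq, (kind, q) in enumerate(queries)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]


queries = [("ad_hoc", "q1"), ("load", "q2"),
           ("strategic", "q3"), ("operational", "q4")]
print(dispatch_order(queries))  # → ['q4', 'q2', 'q3', 'q1']
```

A static ordering like this is the simplest baseline; the desirable mechanism described next would adjust such priorities dynamically as workloads compete.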
Thus, it would be desirable to provide a mechanism for dynamic prioritization of database queries, where the mechanism appropriately balances competing workloads.