Guaranteeing the performance and availability of critical resources in a computing environment requires ongoing collection and analysis of resource-capacity and resource-utilization data. Large volumes of current data may need to be collected and quickly processed so that support personnel can mitigate, resolve, or avoid short-term problems, or identify a trend that might trigger such a problem in the future. Aggregating and analyzing such data in a timely manner may require a large, centralized computing resource and data repository capable of capturing and consolidating this large volume of heterogeneous data culled from a variety of sources. In more complex computing environments, however, even a mainframe computer may be unable to translate this raw data into customized analyses tailored to specific support personnel, and to deliver those analyses quickly enough to address imminent problems.