The explosive growth of enterprise data centers to petabyte scale, combined with the need of business-critical applications for zero downtime, high performance, on-demand resource allocation, and minimal windows for maintenance and resource reallocation, is forcing a change in how administrators execute their tasks. There is a growing reliance on storage management frameworks that have evolved from collections of raw device configuration, performance, and event data to supporting intelligent wizards that exhaustively search for options and provide step-by-step guidance for decision-making.
Existing approaches for creating information models in systems management frameworks are typically non-adaptive; that is, models are created using a standard form such as linear, piece-wise linear, or non-linear. These approaches often have some of the following limitations: they do not scale in computation time and resources with respect to petabyte-scale information technology (IT) growth; they are slow to react to rapid and bursty workload variations; and they exhibit lower average accuracy because the entire history is maintained as a single model.
Given the growing trend towards dynamic virtualized data centers that require automation and planning, it is necessary for algorithms to operate in "real time" rather than as offline background optimization. For instance, adapting a non-linear information extraction model to new data incurs significantly higher overhead than adapting a linear model. Furthermore, algorithms vary in their susceptibility to missing data points in real-world monitoring.
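The cost difference can be illustrated with a sketch (the class and method names are ours): a least-squares line can be maintained from a handful of sufficient statistics, so absorbing a new monitoring sample is a constant-time update, whereas a non-linear model typically requires an iterative refit over the data.

```python
class OnlineLinearModel:
    """Incrementally maintained least-squares line y = a*x + b."""

    def __init__(self):
        self.n = 0
        self.sx = self.sy = self.sxx = self.sxy = 0.0

    def add(self, x, y):
        # O(1) update of the sufficient statistics; a missing monitoring
        # sample is simply never added, so gaps do not corrupt the state.
        self.n += 1
        self.sx += x
        self.sy += y
        self.sxx += x * x
        self.sxy += x * y

    def coefficients(self):
        denom = self.n * self.sxx - self.sx * self.sx
        a = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - a * self.sx) / self.n
        return a, b

m = OnlineLinearModel()
for x in range(10):
    m.add(x, 3.0 * x + 1.0)  # samples on the line y = 3x + 1
a, b = m.coefficients()
```

Each new sample costs four additions and a counter increment, independent of history length; this is the kind of real-time adaptation that a non-linear model, refit over the full history, cannot match.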