The complexity of configuring computing systems represents a major impediment to efficient, error-free, and cost-effective deployment and management of computing systems of all scales, from handheld devices to desktop personal computers to small-business servers to enterprise-scale and global-scale information technology (IT) backbones. By way of example, configuring a computing system may encompass any process via which any of the system's structure, component inventory, topology, or operational parameters are persistently modified by a human operator or system administrator.
A computing system with a high degree of configuration complexity demands human resources to manage that complexity, increasing the total cost of ownership of the computing system. Likewise, complexity increases the amount of time that must be spent interacting with a computing system to configure it to perform the desired function, again consuming human resources and decreasing efficiency and agility. Finally, configuration complexity results in configuration errors, as complexity challenges human reasoning and results in erroneous decisions even by skilled operators.
Since the burdens of configuration complexity are so high, it is evident that computing system designers, architects, and implementers will seek to reduce configuration complexity, and likewise the purchasers, users, and managers of computing systems will seek to assemble systems with minimal configuration complexity. To do so, they must be able to quantitatively evaluate the degree of configuration complexity exposed by a particular computing system: designers, architects, and implementers can then evaluate the systems they build and optimize them for reduced complexity, while purchasers, users, and managers can evaluate prospective purchases for complexity before investing in them. Furthermore, quantitative evaluation of configuration complexity can help computing service providers and outsourcers quantify the amount of human management that will be needed to provide a given service, allowing them to more effectively evaluate costs and set price points.
All these scenarios require standardized, representative, accurate, easily-compared quantitative assessments of configuration complexity, and motivate the need for a system and methods for quantitatively evaluating the configuration complexity of an arbitrary computing system.
The prior art of computing system evaluation includes no system or methods for quantitatively evaluating the configuration complexity of an arbitrary computing system. Well-studied computing system evaluation areas include system performance analysis, software complexity analysis, human-computer interaction analysis, and dependability evaluation.
System performance analysis attempts to compute quantitative measures of the performance of a computer system, considering both hardware and software components. This is a well-established area rich in analysis techniques and systems. However, none of these methodologies and systems for system performance analysis consider configuration-related aspects of the system under evaluation, nor do they collect or analyze configuration-related data. Therefore, system performance analysis provides no insight into the configuration complexity of the computing system being evaluated.
Software complexity analysis attempts to compute quantitative measures of the complexity of a piece of software code, considering both the intrinsic complexity of the code and the complexity of creating and maintaining it. However, processes for software complexity analysis do not collect configuration-related statistics or data and therefore provide no insight into the configuration complexity of the computing system running the analyzed software.
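To make concrete the kind of quantitative measure that software complexity analysis produces, the following minimal Python sketch approximates one classic metric, McCabe's cyclomatic complexity, by counting decision points in a parsed abstract syntax tree. The choice of metric, the set of node types counted, and the function names are illustrative assumptions, not part of the text above; note how the measure operates purely on source code and never touches configuration data.

```python
import ast

# Node types treated as decision points; counting each BoolOp as a single
# decision is a simplification (the standard metric counts each boolean
# operator individually).
_DECISIONS = (ast.If, ast.For, ast.While, ast.ExceptHandler,
              ast.BoolOp, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    decisions = sum(isinstance(node, _DECISIONS) for node in ast.walk(tree))
    return 1 + decisions

code = """
def classify(x):
    if x < 0:
        return "negative"
    for i in range(x):
        if i % 2 == 0 and i > 2:
            pass
    return "done"
"""
# Two ifs, one for loop, and one boolean expression yield complexity 5.
print(cyclomatic_complexity(code))
```

A metric like this yields a single reproducible number per code body, which is exactly the style of quantitative result the section contrasts with the absence of any analogous configuration-complexity measure.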
Human-computer interaction (HCI) analysis attempts to identify interaction problems between human users and computer systems, typically focusing on identifying confusing, error-prone, or inefficient interaction patterns. However, HCI analysis focuses on detecting problems in human-computer interaction rather than performing an objective, quantitative complexity analysis of that interaction. HCI analysis methods are not designed specifically for measuring configuration complexity, and typically do not operate on configuration-related data. In particular, HCI analysis collects human performance data from observations of many human users, and thus does not collect configuration-related data directly from a system under test.
Additionally, HCI analysis typically produces qualitative results suggesting areas for improvement of a particular user interface or interaction pattern and, thus, does not produce quantitative results that evaluate the overall configuration complexity of a system independent of the particular user interface experience. The Model Human Processor approach to HCI analysis does provide objective, quantitative results; however, these results quantify interaction time for motor-function tasks such as moving a mouse or clicking an on-screen button, and thus do not provide insight into computer system configuration complexity.
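The motor-function character of Model Human Processor predictions can be illustrated with a minimal sketch of its Keystroke-Level Model variant, which sums per-operator times for a task. The operator times below are approximate textbook values (they vary by study and operator skill), and the example task is hypothetical; the point is that the output is a motor interaction time, not a configuration-complexity measure.

```python
# Approximate Keystroke-Level Model operator times in seconds (values in
# this range appear in the Model Human Processor literature; exact figures
# vary by study and by operator skill).
KLM_TIMES = {
    "K": 0.2,   # press a key or button
    "P": 1.1,   # point with a mouse to an on-screen target
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predicted_task_time(operators: str) -> float:
    """Sum per-operator times for an operator sequence such as 'MPK'."""
    return sum(KLM_TIMES[op] for op in operators)

# Hypothetical task: think, point at a field, click, then type four characters.
print(round(predicted_task_time("MPK" + "K" * 4), 2))
```

Such a model predicts how long the hands and eyes take to execute a fixed action sequence; it says nothing about how hard it was to decide which configuration actions to perform, which is the complexity at issue here.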
Finally, human-aware dependability evaluation combines aspects of objective, reproducible performance benchmarking with HCI analysis techniques, with a focus on configuration-related problems; see, e.g., Brown et al., “Experience with Evaluating Human-Assisted Recovery Processes,” Proceedings of the 2004 International Conference on Dependable Systems and Networks, Los Alamitos, Calif., IEEE, 2004. This approach included a system for measuring the quality of configuration work performed by human users, but it did not measure configuration complexity and did not provide reproducibility or objective measures.