1. Technical Field
Embodiments of the present invention generally relate to data center systems. More particularly, embodiments of the present invention relate to a method and apparatus for managing configurations of computer resources.
2. Description of the Related Art
Data centers are used to house mission-critical computer systems and associated components. A data center includes environmental controls, such as air conditioning and fire suppression, as well as redundant/backup power supplies, redundant data communications connections, and high security, among other features. Typically, mid- to large-sized companies or organizations have one or more data centers. A bank, for example, may have a data center where all of its customers' account information is maintained and transactions involving this data are performed. In another example, large cities may have multiple special-purpose data center buildings in secure locations near telecommunications services. Most collocation centers and Internet peering points are located in these kinds of facilities.
Conventional enterprise data centers frequently accommodate thousands of servers running hundreds of applications. In such centers, it is difficult to administer these servers so that all of the servers are appropriately configured, patched, updated, and the like, in accordance with the applications that the servers host.
In order to handle the aforementioned circumstances, the current practice is to utilize discovery tools to gather configuration data from the data center. The configuration data is then tested against a set of predefined rules, such as templates, reference configurations, gold standards, and the like, which are usually derived from ‘best practices’ or other Information Technology (IT) policies. If the test reveals a difference between the set of predefined rules and the configuration data, then a configuration is likely in violation of the predefined rules or otherwise anomalous. The violations or anomalies are then flagged for administrator attention. Furthermore, the difference also indicates that a resource within the data center is most likely misconfigured. Such misconfigured resources may cause performance and/or other issues for the data center.
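The template-checking practice described above may be sketched as follows. This is a minimal, hypothetical illustration only, assuming that discovered configuration data and reference templates are represented as simple key-value mappings; actual discovery tools use far richer data models.

```python
# Hypothetical sketch of rule-based configuration checking, assuming
# both the discovered configuration and the reference template are
# flat key-value mappings of parameter names to expected values.

def find_violations(config, template):
    """Compare discovered configuration data against a reference
    template and return the parameters that deviate."""
    violations = {}
    for parameter, expected in template.items():
        actual = config.get(parameter)  # missing parameters yield None
        if actual != expected:
            violations[parameter] = {"expected": expected, "actual": actual}
    return violations

# Example: a server whose discovered settings drift from the template.
template = {"max_connections": 1000, "tls_enabled": True, "patch_level": "5.2"}
discovered = {"max_connections": 1000, "tls_enabled": False, "patch_level": "5.1"}

# Flag each deviation for administrator attention.
for parameter, detail in find_violations(discovered, template).items():
    print(f"Violation: {parameter}: expected {detail['expected']}, "
          f"found {detail['actual']}")
```

As the surrounding discussion notes, a check of this kind can only flag deviations from rules that have been codified in the template; it cannot detect misconfigurations for which no rule exists.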
Such violations or anomalies arise because not all applications in conventional data centers have a configuration reference template specified. In certain scenarios, even if templates for some applications are specified, not all configuration parameters (or rules) may be codified; some of the rules may be overlooked owing to human error. In addition, the templates may be incomplete and/or incompletely implemented. Also, as data centers evolve over time, these rules have to be updated accordingly. During such upgrades, the templates lag behind the state of the data center because configuration sanity checks on the templates take lower priority than keeping the applications available, updated, and secure.
Data centers are usually managed in ‘silos.’ Within a given data center, storage administrators independently manage storage devices and specify their templates; similarly, server administrators independently specify server templates, and so on. In such scenarios, configuration settings that span these silos cannot be easily captured in templates. Thus, configuration errors that occur due to a lack of coordination among the administrators often remain undetected until the configuration data causes a performance issue.
As stated above, existing tools require a hard-coded set of rules against which the configuration data is checked. However, such tools fail to discover every configuration error. Moreover, domain expertise is needed to create such a set of rules.
Accordingly, there is a need in the art for a method and apparatus for managing configurations to enforce data center compliance.