In many large enterprises, computing servers or other networked devices are distributed globally across a diverse computing network infrastructure. The various servers deployed in the network implement many different operating systems that execute a myriad of different software packages, applications, tools and the like. In many instances, a single server may host many different software packages, applications or the like.
In order to manage such a diverse and complex computing infrastructure, such enterprises typically employ support teams whose job it is to keep the systems running and ensure that risks to the systems are minimized. Frequently, operating system (OS) and software manufacturers will release updates, in the form of patches, service packs or the like, that serve to minimize vulnerabilities and risks to their respective OS or software application. In this regard, many of the system updates/patches are deemed critical in addressing security fixes and, as such, it is imperative that the updates/patches be deployed throughout the computing infrastructure in a timely fashion.
However, in large enterprises, with many different computing environments and business units/lines-of-business (LOBs), timely deployment of the updates/patches is highly problematic. This is because the data associated with servers, business units/LOBs and other data relevant to deploying the updates is spread across many different data sources, each of which must be constantly monitored to assess risk, vulnerabilities and the like. While many of these different data sources are capable of generating log files and creating reports that indicate the risk, in today's enterprise environment support team members are tasked with the highly manual process of pulling the reports from the data sources/systems, consolidating/reformatting the data, and applying diverse business rules to produce a final list of which servers require updates/patches and a schedule for deploying such updates/patches. The manual process is not only inefficient and time-consuming, negatively impacting the critical nature of the deployment process, but is also prone to human error, in which servers requiring updates/patches may be inadvertently overlooked.
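The consolidation step described above, in which report rows pulled from multiple data sources are merged into a single normalized list and a business rule is applied to flag servers requiring a patch, can be sketched as follows. This is a minimal illustration only; the `ServerRecord` type, the field names, and the single patch-level rule are hypothetical and stand in for whatever schemas and business rules a given enterprise actually uses.

```python
from dataclasses import dataclass


@dataclass
class ServerRecord:
    """Normalized view of one server, merged from several raw reports."""
    hostname: str
    os_version: str
    lob: str          # business unit / line of business
    patch_level: str


def consolidate(reports):
    """Merge raw report rows from several data sources into one
    normalized list, de-duplicating by hostname (case-insensitive)."""
    seen = {}
    for report in reports:
        for row in report:
            host = row["hostname"].lower()
            # A later source supersedes an earlier record for the same host
            # rather than producing a duplicate entry.
            seen[host] = ServerRecord(
                hostname=host,
                os_version=row.get("os_version", "unknown"),
                lob=row.get("lob", "unassigned"),
                patch_level=row.get("patch_level", "unknown"),
            )
    return list(seen.values())


def needs_patch(record, required_level):
    # Simplified stand-in for the diverse business rules: any server
    # not already at the required patch level is flagged for deployment.
    return record.patch_level != required_level
```

In this sketch the de-duplication dictionary replaces the error-prone manual reconciliation of overlapping reports: a server listed by two monitoring systems appears once in the final list, with the most recently ingested data.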
Therefore, a need exists to automate the process of server remediation in an enterprise-type computing infrastructure, such that the deployment of critical updates/patches across all computing servers requiring such is ensured and occurs within prescribed time limits. In this regard, a need exists to automatically extract data from all of the different data sources that contain data relevant to the update/patch process and automatically consolidate and transform/reformat the data to accommodate reporting needs and analytical research. In addition, a need exists to automatically determine the current state of the servers and the OSs and applications running thereon, so as to determine which servers require a pending update/patch. Moreover, a need exists to automatically determine optimal times for deploying the update/patch to each of the servers requiring such, scheduling the servers for deployment and implementing the deployment.
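The final need identified above, automatically assigning each server a deployment time so that all deployments complete within a prescribed limit, can be sketched as a simple batching scheduler. This is one illustrative approach under stated assumptions, not a definitive design: the fixed-length maintenance windows, the `per_window` batch size, and alphabetical ordering of hosts are all hypothetical simplifications of whatever optimality criteria (load, LOB change freezes, dependency order) a real enterprise would apply.

```python
from datetime import datetime, timedelta


def schedule_deployments(hostnames, start, deadline, window_hours=2, per_window=50):
    """Assign each server a maintenance window between start and deadline,
    batching at most per_window servers into each fixed-length window so
    that deployments are spread across the prescribed time limit."""
    # Enumerate candidate window start times within the allowed period.
    windows = []
    t = start
    while t < deadline:
        windows.append(t)
        t += timedelta(hours=window_hours)
    if not windows:
        raise ValueError("deadline must allow at least one maintenance window")

    # Fill windows in order, per_window servers at a time; wrap around if
    # there are more batches than windows (a real scheduler would instead
    # flag this as exceeding capacity for the prescribed limit).
    schedule = {}
    for i, host in enumerate(sorted(hostnames)):
        schedule[host] = windows[(i // per_window) % len(windows)]
    return schedule
```

For example, three servers scheduled into a four-hour period with two-hour windows and a batch size of two would occupy the first window with two servers and the second window with the remaining one.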